diff mbox series

[RFC,21/30] python3-bcrypt: migrate to vendor cargo class

Message ID 20250211150034.18696-21-stefan.herbrechtsmeier-oss@weidmueller.com
State New
Headers show
Series Add vendor support for go, npm and rust | expand

Commit Message

Stefan Herbrechtsmeier Feb. 11, 2025, 3 p.m. UTC
From: Stefan Herbrechtsmeier <stefan.herbrechtsmeier@weidmueller.com>

Signed-off-by: Stefan Herbrechtsmeier <stefan.herbrechtsmeier@weidmueller.com>
---

 .../python/python3-bcrypt-crates.inc          | 84 -------------------
 .../python/python3-bcrypt_4.2.1.bb            |  4 +-
 2 files changed, 1 insertion(+), 87 deletions(-)
 delete mode 100644 meta/recipes-devtools/python/python3-bcrypt-crates.inc
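For context, a rough sketch of the sort of recipe change this diffstat implies. The exact inherit line and class name are assumptions based on the series title; they are not shown in this excerpt:

```
# Before: the recipe pulled in a generated, checked-in list of crate:// entries
require python3-bcrypt-crates.inc
inherit cargo-update-recipe-crates

# After (hypothetical): a vendor class resolves the crate list from the
# upstream Cargo.lock at build time, so no generated .inc is checked in
inherit vendor-cargo
```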

Comments

Richard Purdie Feb. 11, 2025, 9:46 p.m. UTC | #1
On Tue, 2025-02-11 at 16:00 +0100, Stefan Herbrechtsmeier via lists.openembedded.org wrote:
> From: Stefan Herbrechtsmeier <stefan.herbrechtsmeier@weidmueller.com>
> 
> Signed-off-by: Stefan Herbrechtsmeier <stefan.herbrechtsmeier@weidmueller.com>
> ---
> 
>  .../python/python3-bcrypt-crates.inc          | 84 -------------------
>  .../python/python3-bcrypt_4.2.1.bb            |  4 +-
>  2 files changed, 1 insertion(+), 87 deletions(-)
>  delete mode 100644 meta/recipes-devtools/python/python3-bcrypt-crates.inc

So let me ask the silly question. This removes the crates.inc file and
doesn't appear to add any kind of new list of locked down modules.

This means that inspection tools just using the metadata can't see
"into" this recipe any longer for component information. This was
something that some people felt strongly that was a necessary part of
recipe metadata, for license, security and other manifest activities.

Are we basically saying that information is now only available after
the build takes place?

I'm very worried that the previous discussions didn't reach a
conclusion and this is moving the "magic" out of bitbake and into some
vendor classes without addressing the concerns previously raised about
transparency into the manifests of what is going on behind the scenes.

I appreciate some of the requirements are conflicting.

For the record in some recent meetings, I was promised that help would
be forthcoming in helping guide this discussion. I therefore left
things alone in the hope that would happen. It simply hasn't, probably
due to time/work issues, which I can sympathise with but it does mean
I'm left doing a bad job of trying to respond to your patches whilst
trying to do too many other things badly too. That leaves us both very
frustrated.

I really want to see you succeed in reworking this and I appreciate the
time and effort put into the patches. To make this successful, I know
there are key stakeholders who need to buy into it and right now,
they're more likely just to keep doing their own things as it is easier
since this isn't going the direction they want. A key piece of making
this successful is negotiating something which can work for a
significant portion of them. I'm spelling all this out since I do at
least want to make the situation clear.

Yes, I'm very upset the OE community is putting me in this position
despite me repeatedly asking for help and that isn't your fault, which
just frustrates me more.

Cheers,

Richard
Stefan Herbrechtsmeier Feb. 12, 2025, 2:36 p.m. UTC | #2
Am 11.02.2025 um 22:46 schrieb Richard Purdie:
> On Tue, 2025-02-11 at 16:00 +0100, Stefan Herbrechtsmeier via lists.openembedded.org wrote:
>> From: Stefan Herbrechtsmeier<stefan.herbrechtsmeier@weidmueller.com>
>>
>> Signed-off-by: Stefan Herbrechtsmeier<stefan.herbrechtsmeier@weidmueller.com>
>> ---
>>
>>   .../python/python3-bcrypt-crates.inc          | 84 -------------------
>>   .../python/python3-bcrypt_4.2.1.bb            |  4 +-
>>   2 files changed, 1 insertion(+), 87 deletions(-)
>>   delete mode 100644 meta/recipes-devtools/python/python3-bcrypt-crates.inc
> So let me ask the silly question. This removes the crates.inc file and
> doesn't appear to add any kind of new list of locked down modules.
The list is generated on the fly, like gitsm, and doesn't require an extra
step.

> This means that inspection tools just using the metadata can't see
> "into" this recipe any longer for component information.

We support and use Python code inside the variables and therefore need to
preprocess the metadata in any case.

What do you mean by "component information"?

> This was
> something that some people felt strongly that was a necessary part of
> recipe metadata, for license, security and other manifest activities.

Why can't they use the SBOM for this?

> Are we basically saying that information is now only available after
> the build takes place?
It is only available after a special task has run.

> I'm very worried that the previous discussions didn't reach a
> conclusion and this is moving the "magic" out of bitbake and into some
> vendor classes without addressing the concerns previously raised about
> transparency into the manifests of what is going on behind the scenes.

I am trying to address the concerns, but I don't see why the missing
information in the recipe is a blocker.

This version gives the user the possibility to influence the
dependencies via patches or an alternative lock file. It creates a
vendor folder for easy patching and debugging. It integrates the
dependencies into the SBOM for security tracking.

I skipped the license topic for now because the package managers don't
handle license integrity. We have to keep that information in the
recipe, but hopefully the license information doesn't change with each
update.

I don't understand the requirement for plain inspection. In my opinion,
external tools should always use a defined output and shouldn't depend
on project-internal details. I have adapted the existing users of
SRC_URI to include the dynamic SRC_URIs.

> I appreciate some of the requirements are conflicting.
>
> For the record in some recent meetings, I was promised that help would
> be forthcoming in helping guide this discussion. I therefore left
> things alone in the hope that would happen. It simply hasn't, probably
> due to time/work issues, which I can sympathise with but it does mean
> I'm left doing a bad job of trying to respond to your patches whilst
> trying to do too many other things badly too. That leaves us both very
> frustrated.
>
> I really want to see you succeed in reworking this and I appreciate the
> time and effort put into the patches. To make this successful, I know
> there are key stakeholders who need to buy into it and right now,
> they're more likely just to keep doing their own things as it is easier
> since this isn't going the direction they want. A key piece of making
> this successful is negotiating something which can work for a
> significant portion of them. I'm spelling all this out since I do at
> least want to make the situation clear.
>
> Yes, I'm very upset the OE community is putting me in this position
> despite me repeatedly asking for help and that isn't your fault, which
> just frustrates me more.

My problem is the double standard. We have supported a fetcher which
dynamically resolves dependencies, without a manual update step, for
years. Nobody suggests making the gitsm fetcher obsolete and requiring
users to run an update task after a SRC_URI change to create a .inc file
with the SRC_URIs of all the recursive submodules. Nobody complains
about the missing components in the recipe.

Either we have hard requirements and introduce git submodule support
which satisfies those requirements, or we accept the advantages of a
simple user interface and minimize the disadvantages.

It doesn't matter if we run the resolve function inside a resolve, fetch
or update task. The question is: do we want to support dynamic SRC_URIs,
or do we want a manual update task? The task needs to be run manually
after a SRC_URI change and can produce a lot of noise in the update
commit. In any case, manually editing the SRC_URI isn't practical, and
users will use the package manager to update dependencies and their
recursive dependencies.

As a compromise we could add a new feature to generate .inc cache files
before the main bitbake run. This would eliminate the manual update run
and the commit noise, as well as the special fetch, unpack and patch tasks.
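(For reference, the kind of content such a generated or checked-in crates .inc file carries. The entries below are illustrative, not the 84 lines deleted by this patch, and `<sha256>` stands in for a real checksum:)

```
SRC_URI += " \
    crate://crates.io/autocfg/1.1.0 \
    crate://crates.io/cfg-if/1.0.0 \
"
SRC_URI[autocfg-1.1.0.sha256sum] = "<sha256>"
SRC_URI[cfg-if-1.0.0.sha256sum] = "<sha256>"
```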
Richard Purdie Feb. 12, 2025, 3:06 p.m. UTC | #3
On Wed, 2025-02-12 at 15:36 +0100, Stefan Herbrechtsmeier wrote:
> My problem is the double standards. We support a fetcher which
> dynamic resolve dependencies and without manual update step since
> years. Nobody suggests to make the gitsm fetcher obsolete and
> requests the users to run an update task after a SRC_URI change to
> create a .inc file with the SRC_URIs of all the recursive submodules.
> Nobody complains about the missing components in the recipe.

FWIW there have been problems and complaints about gitsm. There were
some people who explicitly converted gitsm recipes into git urls due to
issues with the fetcher.

We have on the most part dealt with the worst issues in gitsm now but
there are still some who don't use it.

With git there is at least a relatively wide understanding of the
technology. With many other areas like node, go and rust, the tools are
much younger without some of the features we sometimes need, at least
in the past and people's overall trust is much lower as a result.

At this point I'm not sure there would be demand to change gitsm but
there are still the concerns about that approach.

>  Whether we have hard requirements and introduce a git submodule
> support which satisfy the requirements or we accept the advantages of
> a simple user interface and minimize the disadvantages.
>
>  It doesn't matter if we run the resolve function inside a resolve,
> fetch or update task. The questions is do we want to support dynamic
> SRC_URIs or do we want an manual update task. The task needs to be
> manual run after a SRC_URI change and can produces a lot of noise in
> the update commit. In any case the manual editing of the SRC_URI
> isn't practical and the users will use the package manager to update
> dependencies and its recursive dependencies.
>  
> As a compromise we could add a new feature to generate .inc cache
> files before the main bitbake run. This would eliminate the manual
> update run and the commit noise as well as special fetch, unpack and
> patch task.

I've partly been trying to channel some of the feelings I've been
hearing expressed in different places, as I do want any new solution to
meet the needs of the widest group we can. If the .inc is generated but
not checked in (to remove the commit noise), I suspect it loses some of
the value that people have wanted.

I'm personally very torn. I get pushed many different ways, which
include robustness, simplicity and complexity reduction, performance
and various extremes of auditing. I feel I'm doing a bad job of trying
to represent everything :(.

I do see that there are basically two conflicting approaches and I'm
not sure everyone is going to be happy with either. This does worry me
a lot and puts me in a difficult place too.

Cheers,

Richard
Bruce Ashfield Feb. 12, 2025, 3:07 p.m. UTC | #4
On Wed, Feb 12, 2025 at 9:36 AM Stefan Herbrechtsmeier via
lists.openembedded.org <stefan.herbrechtsmeier-oss=
weidmueller.com@lists.openembedded.org> wrote:

> Am 11.02.2025 um 22:46 schrieb Richard Purdie:
>
> On Tue, 2025-02-11 at 16:00 +0100, Stefan Herbrechtsmeier via lists.openembedded.org wrote:
>
> From: Stefan Herbrechtsmeier <stefan.herbrechtsmeier@weidmueller.com>
>
> Signed-off-by: Stefan Herbrechtsmeier <stefan.herbrechtsmeier@weidmueller.com>
> ---
>
>  .../python/python3-bcrypt-crates.inc          | 84 -------------------
>  .../python/python3-bcrypt_4.2.1.bb            |  4 +-
>  2 files changed, 1 insertion(+), 87 deletions(-)
>  delete mode 100644 meta/recipes-devtools/python/python3-bcrypt-crates.inc
>
> So let me ask the silly question. This removes the crates.inc file and
> doesn't appear to add any kind of new list of locked down modules.
>
> The list is generated on the fly like gitsm and doesn't require an extra
> step.
>
> This means that inspection tools just using the metadata can't see
> "into" this recipe any longer for component information.
>
> We support and use python code inside the variables and thereby need a
> preprocessing of the metadata in any case.
>
> What do you mean by "component information"?
>
> This was
> something that some people felt strongly that was a necessary part of
> recipe metadata, for license, security and other manifest activities.
>
> Why can't they use the SBOM for this?
>
> Are we basically saying that information is now only available after
> the build takes place?
>
> They are only available after a special task run.
>
> I'm very worried that the previous discussions didn't reach a
> conclusion and this is moving the "magic" out of bitbake and into some
> vendor classes without addressing the concerns previously raised about
> transparency into the manifests of what is going on behind the scenes.
>
> I try to address the concerns but don't realize that the missing
> information in the recipe is a blocker.
>
> This version gives the user the possibility to influence the dependencies
> via patches or alternative lock file. It creates a vendor folder for easy
> patch and debug. It integrates the dependencies into the SBOM for security
> tracking.
>
> I skipped the license topic for now because the package managers don't
> handle license integrity. We have to keep the information in the recipe but
> hopefully the license information doesn't change with each update.
>
> I don't understand the requirement for the plain inspection. In my opinion
> external tools should always use a defined output and shouldn't depend on
> the project internal details. I adapt the existing users of the SRC_URI to
> include the dynamic SRC_URIs.
>
> I appreciate some of the requirements are conflicting.
>
> For the record in some recent meetings, I was promised that help would
> be forthcoming in helping guide this discussion. I therefore left
> things alone in the hope that would happen. It simply hasn't, probably
> due to time/work issues, which I can sympathise with but it does mean
> I'm left doing a bad job of trying to respond to your patches whilst
> trying to do too many other things badly too. That leaves us both very
> frustrated.
>
> I really want to see you succeed in reworking this and I appreciate the
> time and effort put into the patches. To make this successful, I know
> there are key stakeholders who need to buy into it and right now,
> they're more likely just to keep doing their own things as it is easier
> since this isn't going the direction they want. A key piece of making
> this successful is negotiating something which can work for a
> significant portion of them. I'm spelling all this out since I do at
> least want to make the situation clear.
>
> Yes, I'm very upset the OE community is putting me in this position
> despite me repeatedly asking for help and that isn't your fault, which
> just frustrates me more.
>
> My problem is the double standards. We support a fetcher which dynamic
> resolve dependencies and without manual update step since years. Nobody
> suggests to make the gitsm fetcher obsolete and requests the users to run
> an update task after a SRC_URI change to create a .inc file with the
> SRC_URIs of all the recursive submodules. Nobody complains about the
> missing components in the recipe.
>
There's no double standard; I'd simply say that design decisions of the
past don't mean that there aren't better ways to do something new.

Richard went out of his way to explain the status and what sort of review
needs to happen, I'll add that while getting frustrated with it is natural,
pushing back on people doing reviews isn't going to help get things merged,
it will do the opposite.

There have been plenty of complaints and issues with the gitsm fetcher, but
the reality is that if someone wants to get at the base components of what
it is doing, they can do so. I've had to take several of my maintained
recipes out of gitsm and back to the base git fetches. The submodules were
simply fetching code that didn't build and there was no way to fetch it.
The gitsm fetcher is also relatively lightly used, much less complicated,
and doesn't need much extra infrastructure to support it.


> Whether we have hard requirements and introduce a git submodule support
> which satisfy the requirements or we accept the advantages of a simple user
> interface and minimize the disadvantages.
>
Unfortunately in my experience the simple interfaces hiding complexity
don't help when things go wrong. That's how I ended up where I am with my
go recipes, and why I ended up tearing my gitsm recipe back into its
components. There was no way to influence / fix the build otherwise, and
they didn't support bleeding edge development very well.

I'm definitely one of the people Richard is mentioning as a stakeholder,
and one that could likely just ignore all of this .. but I'm attempting to
wade into it again.

None of us have the hands on, daily experience with the components at play
as you do right now, so patience on your part will be needed as we ask many
not-so-intelligent questions.


> It doesn't matter if we run the resolve function inside a resolve, fetch
> or update task. The questions is do we want to support dynamic SRC_URIs or
> do we want an manual update task. The task needs to be manual run after a
> SRC_URI change and can produces a lot of noise in the update commit. In any
> case the manual editing of the SRC_URI isn't practical and the users will
> use the package manager to update dependencies and its recursive
> dependencies.
>
I don't understand the series well enough yet to say "why can't we do
both": if there was a way to abstract / componentize what is generating
those dynamic SRC_URIs in such a way that an external tool or update
task could generate them, and if they were already in place the dynamic
generation wouldn't run at build time, that should keep both modes working.

I admit to not understanding why we'd be overly concerned about noise in
the commits (for the dependencies) if they are split into separate files in
the recipe. More information is always better when I'm dealing with the
updates. I just scroll past it if I'm not interested and filter it if I am.

I feel the pain (and your pain) of this after supporting complicated
go/mixed language recipes through multiple major releases (and through go's
changing dependency model + bleeding edge code, etc) and needing to track
what has changed, so I definitely encourage you to keep working on this.

> As a compromise we could add a new feature to generate .inc cache files
> before the main bitbake run. This would eliminate the manual update run and
> the commit noise as well as special fetch, unpack and patch task.

Can you elaborate on what you mean by before the main bitbake run? Would
it still be under a single bitbake invocation, or would it be multiple
runs? (I support multiple runs, so don't take that as a leading question.)

Bruce

Stefan Herbrechtsmeier Feb. 12, 2025, 5:24 p.m. UTC | #5
Am 12.02.2025 um 16:07 schrieb Bruce Ashfield:
> On Wed, Feb 12, 2025 at 9:36 AM Stefan Herbrechtsmeier via 
> lists.openembedded.org <http://lists.openembedded.org> 
> <stefan.herbrechtsmeier-oss=weidmueller.com@lists.openembedded.org> wrote:
>
>     Am 11.02.2025 um 22:46 schrieb Richard Purdie:
>>     On Tue, 2025-02-11 at 16:00 +0100, Stefan Herbrechtsmeier vialists.openembedded.org <http://lists.openembedded.org> wrote:
>>>     From: Stefan Herbrechtsmeier <stefan.herbrechtsmeier@weidmueller.com>
>>>
>>>     Signed-off-by: Stefan Herbrechtsmeier <stefan.herbrechtsmeier@weidmueller.com>
>>>     ---
>>>
>>>       .../python/python3-bcrypt-crates.inc          | 84 -------------------
>>>       .../python/python3-bcrypt_4.2.1.bb            |  4 +-
>>>       2 files changed, 1 insertion(+), 87 deletions(-)
>>>       delete mode 100644 meta/recipes-devtools/python/python3-bcrypt-crates.inc
>>     So let me ask the silly question. This removes the crates.inc file and
>>     doesn't appear to add any kind of new list of locked down modules.
>     The list is generated on the fly like gitsm and doesn't require an
>     extra step.
>
>>     This means that inspection tools just using the metadata can't see
>>     "into" this recipe any longer for component information.
>
>     We support and use python code inside the variables and thereby
>     need a preprocessing of the metadata in any case.
>
>     What do you mean by "component information"?
>
>>     This was
>>     something that some people felt strongly that was a necessary part of
>>     recipe metadata, for license, security and other manifest activities.
>
>     Why can't they use the SBOM for this?
>
>>     Are we basically saying that information is now only available after
>>     the build takes place?
>     They are only available after a special task run.
>
>>     I'm very worried that the previous discussions didn't reach a
>>     conclusion and this is moving the "magic" out of bitbake and into some
>>     vendor classes without addressing the concerns previously raised about
>>     transparency into the manifests of what is going on behind the scenes.
>
>     I try to address the concerns but don't realize that the missing
>     information in the recipe is a blocker.
>
>     This version gives the user the possibility to influence the
>     dependencies via patches or alternative lock file. It creates a
>     vendor folder for easy patch and debug. It integrates the
>     dependencies into the SBOM for security tracking.
>
>     I skipped the license topic for now because the package managers
>     don't handle license integrity. We have to keep the information in
>     the recipe but hopefully the license information doesn't change
>     with each update.
>
>     I don't understand the requirement for the plain inspection. In my
>     opinion external tools should always use a defined output and
>     shouldn't depend on the project internal details. I adapt the
>     existing users of the SRC_URI to include the dynamic SRC_URIs.
>
>>     I appreciate some of the requirements are conflicting.
>>
>>     For the record in some recent meetings, I was promised that help would
>>     be forthcoming in helping guide this discussion. I therefore left
>>     things alone in the hope that would happen. It simply hasn't, probably
>>     due to time/work issues, which I can sympathise with but it does mean
>>     I'm left doing a bad job of trying to respond to your patches whilst
>>     trying to do too many other things badly too. That leaves us both very
>>     frustrated.
>>
>>     I really want to see you succeed in reworking this and I appreciate the
>>     time and effort put into the patches. To make this successful, I know
>>     there are key stakeholders who need to buy into it and right now,
>>     they're more likely just to keep doing their own things as it is easier
>>     since this isn't going the direction they want. A key piece of making
>>     this successful is negotiating something which can work for a
>>     significant portion of them. I'm spelling all this out since I do at
>>     least want to make the situation clear.
>>
>>     Yes, I'm very upset the OE community is putting me in this position
>>     despite me repeatedly asking for help and that isn't your fault, which
>>     just frustrates me more.
>
>     My problem is the double standards. We support a fetcher which
>     dynamic resolve dependencies and without manual update step since
>     years. Nobody suggests to make the gitsm fetcher obsolete and
>     requests the users to run an update task after a SRC_URI change to
>     create a .inc file with the SRC_URIs of all the recursive
>     submodules. Nobody complains about the missing components in the
>     recipe.
>
> There's no double standard, I'd simply say that design decisions of 
> the past doesn't mean that there aren't better ways to do something new.
>
> Richard went out of his way to explain the status and what sort of 
> review needs to happen, I'll add that while getting frustrated with it 
> is natural, pushing back on people doing reviews isn't going to help 
> get things merged, it will do the opposite.
>
> There have been plenty of complaints and issues with the gitsm 
> fetcher, but the reality is that if someone wants to get at the base 
> components of what it is doing, they can do so. I've had to take 
> several of my maintained recipes out of gitsm and back to the base git 
> fetches. The submodules were simply fetching code that didn't build 
> and there was no way to fetch it.  The gitsm fetcher is also 
> relatively lightly used, much less complicated and doesn't need much 
> extra in infrastructure to support it.

Thanks for your insights. There are two main solutions to the problem:
add patch support to gitsm, so that you could use the git submodule
command and create a patch, or generate a .inc file and manipulate the
SRCREVs. I assume you would prefer a .inc file. What do you think is the
downside of a patch?

>
>     Whether we have hard requirements and introduce a git submodule
>     support which satisfy the requirements or we accept the advantages
>     of a simple user interface and minimize the disadvantages.
>
> Unfortunately in my experience the simple interfaces hiding complexity 
> don't help when things go wrong. That's how I ended up where I am with 
> my go recipes, and why I ended up tearing my gitsm recipe back into 
> its components. There was no way to influence / fix the build 
> otherwise, and they didn't support bleeding edge development very well.
Do you have a good example of a problematic go recipe I could test my approach against?

> I'm definitely one of the people Richard is mentioning as a 
> stakeholder, and one that could likely just ignore all of this .. but 
> I'm attempting to wade into it again.

I am very grateful for that.

>
> None of us have the hands on, daily experience with the components at 
> play as you do right now, so patience on your part will be needed as 
> we ask many not-so-intelligent questions.

That's no problem.

>
>     It doesn't matter if we run the resolve function inside a resolve,
>     fetch or update task. The questions is do we want to support
>     dynamic SRC_URIs or do we want an manual update task. The task
>     needs to be manual run after a SRC_URI change and can produces a
>     lot of noise in the update commit. In any case the manual editing
>     of the SRC_URI isn't practical and the users will use the package
>     manager to update dependencies and its recursive dependencies.
>
> I don't understand the series quite enough yet to say "why can't we do 
> both", if there was a way to abstract / componentize what is 
> generating those dynamic SRC_URIs in such a way that an external tool 
> or update task could generate them, and if they were already in place 
> the dynamic generation wouldn't run at build time, that should keep 
> both modes working.
If it is desired I can add both variants.


> I admit to not understanding why we'd be overly concerned about noise 
> in the commits (for the dependencies) if they are split into separate 
> files in the recipe. More information is always better when I'm 
> dealing with the updates. I just scroll past it if I'm not interested 
> and filter it if I am.
The problem is identifying the relevant parts. Let's say you update a
dependency because of a security issue. Afterwards you update the
project, with a lot of dependency changes. You have to review all the
noise to determine that your updated dependency doesn't go backward in
its version. It is much easier to use a patch: after the project update,
the patch will either fail or not. If it fails, you have a direct focus
on the affected dependency. If you backported the patch from the
project, you can simply drop it with the next update.
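(Editorial illustration of the patch workflow described here. The crate name and versions are made up, and a real patch would also update the checksum line, omitted for brevity. A Cargo.lock patch like this pins the fixed version, and it fails to apply, and so gets noticed, as soon as a later project update touches that dependency:)

```diff
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -10,3 +10,3 @@
 [[package]]
 name = "example-crate"
-version = "1.2.0"
+version = "1.2.1"
```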

> I feel the pain (and your pain) of this after supporting complicated 
> go/mixed language recipes through multiple major releases (and through 
> go's changing dependency model + bleeding edge code, etc) and needing 
> to track what has changed, so I definitely encourage you to keep 
> working on this.
>
>     As a compromise we could add a new feature to generate .inc cache
>     files before the main bitbake run. This would eliminate the manual
>     update run and the commit noise as well as special fetch, unpack
>     and patch task.
>
> Can you elaborate on what you mean by before the main bitbake run?
> Would it still be under a single bitbake invocation, or would it be
> multiple runs (I support multiple runs, so don't take that as a
> leading question)?

I can't answer this question and need Richard's guidance to implement
such a feature. I would assume that bitbake already tracks file changes
and can update its state. The behavior should be similar to a change in
the .inc file: bitbake will detect that an "include_cache" file is
missing and run an update_cache task on the recipe. Afterwards bitbake
detects a file change on the "include_cache" file and parses it. We need
a possibility to mark patches which shouldn't be applied if the
"include_cache" file is missing, because the dependencies are missing.
We need to run the fetch, unpack and patch tasks before the update_cache
task to generate the .inc file.
Stefan Herbrechtsmeier Feb. 12, 2025, 5:27 p.m. UTC | #6
Am 12.02.2025 um 16:06 schrieb Richard Purdie:
> On Wed, 2025-02-12 at 15:36 +0100, Stefan Herbrechtsmeier wrote:
>> My problem is the double standards. We support a fetcher which
>> dynamic resolve dependencies and without manual update step since
>> years. Nobody suggests to make the gitsm fetcher obsolete and
>> requests the users to run an update task after a SRC_URI change to
>> create a .inc file with the SRC_URIs of all the recursive submodules.
>> Nobody complains about the missing components in the recipe.
> FWIW there have been problems and complaints about gitsm. There were
> some people who explicitly converted gitsm recipes into git urls due to
> issues with the fetcher.
>
> We have on the most part dealt with the worst issues in gitsm now but
> there are still some who don't use it.
>
> With git there is at least a relatively wide understanding of the
> technology. With many other areas like node, go and rust, the tools are
> much younger without some of the features we sometimes need, at least
> in the past and people's overall trust is much lower as a result.
>
> At this point I'm not sure there would be demand to change gitsm but
> there are still the concerns about that approach.

>>   Whether we have hard requirements and introduce a git submodule
>> support which satisfy the requirements or we accept the advantages of
>> a simple user interface and minimize the disadvantages.
>>
>>   It doesn't matter if we run the resolve function inside a resolve,
>> fetch or update task. The question is: do we want to support dynamic
>> SRC_URIs or do we want a manual update task? The task needs to be
>> run manually after a SRC_URI change and can produce a lot of noise in
>> the update commit. In any case, manually editing the SRC_URI
>> isn't practical and users will use the package manager to update
>> dependencies and their recursive dependencies.
>>   
>> As a compromise we could add a new feature to generate .inc cache
>> files before the main bitbake run. This would eliminate the manual
>> update run and the commit noise as well as special fetch, unpack and
>> patch task.
> I've partly been trying to channel some of the feelings I've been
> hearing expressed in different places, as I do want any new solution to
> meet the needs of the widest group we can. If the .inc is generated but
> not checked in (to remove the commit noise), I suspect it loses some of
> the value that people have wanted.

It would be helpful if more people participated in the discussion. It 
is hard to understand why the .inc file is needed if the lock file contains 
the same information and common tools exist to manipulate the lock file.
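Stefan's point that the lock file and the generated .inc carry the same information can be made concrete. The crates.inc files in oe-core map each locked crate to a crate:// SRC_URI entry plus a per-crate sha256sum line; a rough Python sketch of that mapping follows. The lock-file excerpt and the deliberately minimal parsing are illustrative only, not the actual implementation from this series:

```python
import re

def lockfile_to_src_uri(lock_text):
    """Turn Cargo.lock [[package]] entries into the crate:// SRC_URI
    lines that a generated -crates.inc would carry (sketch only)."""
    uris, sums = [], []
    for block in lock_text.split("[[package]]")[1:]:
        fields = dict(re.findall(r'(\w+) = "([^"]*)"', block))
        # The top-level package (and path dependencies) have no checksum
        # and are not fetched separately, so they are skipped.
        if "checksum" not in fields:
            continue
        name, version = fields["name"], fields["version"]
        uris.append(f"crate://crates.io/{name}/{version}")
        sums.append(f'SRC_URI[{name}-{version}.sha256sum] = "{fields["checksum"]}"')
    return uris, sums

lock = '''
[[package]]
name = "autocfg"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d468802bab17cbc0cc575e9b053f41e72aa36bfa6b7f55e3529ffa43161b97fa"

[[package]]
name = "bcrypt"
version = "4.2.1"
'''
uris, sums = lockfile_to_src_uri(lock)
print(uris)
print(sums[0])
```

Whether this mapping runs in a manual update task (committing the .inc) or on the fly at fetch time is exactly the trade-off being debated in this thread; the information content is the same either way.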

> I'm personally very torn. I get pushed many different ways, which
> include robustness, simplicity and complexity reduction, performance
> and various extremes of auditing. I feel I'm doing a bad job of trying
> to represent everything :(.
I'm sorry for that.

> I do see that there are basically two conflicting approaches and I'm
> not sure everyone is going to be happy with either. This does worry me
> a lot and puts me in a difficult place too.

Would you accept both solutions?
Bruce Ashfield Feb. 12, 2025, 5:45 p.m. UTC | #7
On Wed, Feb 12, 2025 at 12:24 PM Stefan Herbrechtsmeier <
stefan.herbrechtsmeier-oss@weidmueller.com> wrote:

> Am 12.02.2025 um 16:07 schrieb Bruce Ashfield:
>
> On Wed, Feb 12, 2025 at 9:36 AM Stefan Herbrechtsmeier via
> lists.openembedded.org <stefan.herbrechtsmeier-oss=
> weidmueller.com@lists.openembedded.org> wrote:
>
>> Am 11.02.2025 um 22:46 schrieb Richard Purdie:
>>
>> On Tue, 2025-02-11 at 16:00 +0100, Stefan Herbrechtsmeier via lists.openembedded.org wrote:
>>
>> From: Stefan Herbrechtsmeier <stefan.herbrechtsmeier@weidmueller.com>
>>
>> Signed-off-by: Stefan Herbrechtsmeier <stefan.herbrechtsmeier@weidmueller.com>
>> ---
>>
>>  .../python/python3-bcrypt-crates.inc          | 84 -------------------
>>  .../python/python3-bcrypt_4.2.1.bb            |  4 +-
>>  2 files changed, 1 insertion(+), 87 deletions(-)
>>  delete mode 100644 meta/recipes-devtools/python/python3-bcrypt-crates.inc
>>
>> So let me ask the silly question. This removes the crates.inc file and
>> doesn't appear to add any kind of new list of locked down modules.
>>
>> The list is generated on the fly like gitsm and doesn't require an extra
>> step.
>>
>> This means that inspection tools just using the metadata can't see
>> "into" this recipe any longer for component information.
>>
>> We support and use python code inside the variables and thereby need a
>> preprocessing of the metadata in any case.
>>
>> What do you mean by "component information"?
>>
>> This was
>> something that some people felt strongly that was a necessary part of
>> recipe metadata, for license, security and other manifest activities.
>>
>> Why can't they use the SBOM for this?
>>
>> Are we basically saying that information is now only available after
>> the build takes place?
>>
>> They are only available after a special task run.
>>
>> I'm very worried that the previous discussions didn't reach a
>> conclusion and this is moving the "magic" out of bitbake and into some
>> vendor classes without addressing the concerns previously raised about
>> transparency into the manifests of what is going on behind the scenes.
>>
>> I try to address the concerns but don't realize that the missing
>> information in the recipe is a blocker.
>>
>> This version gives the user the possibility to influence the dependencies
>> via patches or alternative lock file. It creates a vendor folder for easy
>> patch and debug. It integrates the dependencies into the SBOM for security
>> tracking.
>>
>> I skipped the license topic for now because the package managers don't
>> handle license integrity. We have to keep the information in the recipe but
>> hopefully the license information doesn't change with each update.
>>
>> I don't understand the requirement for the plain inspection. In my
>> opinion external tools should always use a defined output and shouldn't
>> depend on the project internal details. I adapt the existing users of the
>> SRC_URI to include the dynamic SRC_URIs.
>>
>> I appreciate some of the requirements are conflicting.
>>
>> For the record in some recent meetings, I was promised that help would
>> be forthcoming in helping guide this discussion. I therefore left
>> things alone in the hope that would happen. It simply hasn't, probably
>> due to time/work issues, which I can sympathise with but it does mean
>> I'm left doing a bad job of trying to respond to your patches whilst
>> trying to do too many other things badly too. That leaves us both very
>> frustrated.
>>
>> I really want to see you succeed in reworking this and I appreciate the
>> time and effort put into the patches. To make this successful, I know
>> there are key stakeholders who need to buy into it and right now,
>> they're more likely just to keep doing their own things as it is easier
>> since this isn't going the direction they want. A key piece of making
>> this successful is negotiating something which can work for a
>> significant portion of them. I'm spelling all this out since I do at
>> least want to make the situation clear.
>>
>> Yes, I'm very upset the OE community is putting me in this position
>> despite me repeatedly asking for help and that isn't your fault, which
>> just frustrates me more.
>>
>> My problem is the double standards. We support a fetcher which dynamic
>> resolve dependencies and without manual update step since years. Nobody
>> suggests to make the gitsm fetcher obsolete and requests the users to run
>> an update task after a SRC_URI change to create a .inc file with the
>> SRC_URIs of all the recursive submodules. Nobody complains about the
>> missing components in the recipe.
>>
> There's no double standard, I'd simply say that design decisions of the
> past don't mean that there aren't better ways to do something new.
>
> Richard went out of his way to explain the status and what sort of review
> needs to happen, I'll add that while getting frustrated with it is natural,
> pushing back on people doing reviews isn't going to help get things merged,
> it will do the opposite.
>
> There have been plenty of complaints and issues with the gitsm fetcher,
> but the reality is that if someone wants to get at the base components of
> what it is doing, they can do so. I've had to take several of my maintained
> recipes out of gitsm and back to the base git fetches. The submodules were
> simply fetching code that didn't build and there was no way to fetch it.
> The gitsm fetcher is also relatively lightly used, much less complicated
> and doesn't need much extra in infrastructure to support it.
>
> Thanks for your insights. There are two main solutions to the problem: add
> patch support to gitsm so that you can use the git submodule command and
> create a patch, or generate a .inc file and manipulate the SRCREVs. I assume
> you would prefer a .inc file. What do you think is the downside of a patch?
>

It is much easier to get a complete view of the file with a drop-in, versus
a patch (depending on the size of the file). You need to know the base
directory, the depth, put in an upstream-status, create it, copy it to your
layer, etc. With a drop-in lock file, I copy it out, edit it, and add it to
my SRC_URI.  Not much difference in the end, but I prefer the drop-in
approach on pretty much any configuration file in my builds, not just this
example.



>
>
>> Whether we have hard requirements and introduce a git submodule support
>> which satisfy the requirements or we accept the advantages of a simple user
>> interface and minimize the disadvantages.
>>
> Unfortunately in my experience the simple interfaces hiding complexity
> don't help when things go wrong. That's how I ended up where I am with my
> go recipes, and why I ended up tearing my gitsm recipe back into its
> components. There was no way to influence / fix the build otherwise, and
> they didn't support bleeding edge development very well.
>
> Do you have a good example for a problematic go recipe to test my approach?
>

Not right now, the current state is relatively stable as I'm working
towards the LTS release. These have just popped up repeatedly over the
maybe 5+ years (I can't remember how long it has been!) in maintaining
meta-virtualization. I have no doubts (and am not implying) that your
series could adapt my recipes and use all of the go mod infrastructure ..
just with all of the vendoring and go mod efforts over the years, going all
the way back to the actual source code gave a lot more visibility into the
vendor dependencies (and not just what was released for them) and I've used
it many times while debugging runtime issues of the container stacks.


> I'm definitely one of the people Richard is mentioning as a stakeholder,
> and one that could likely just ignore all of this .. but I'm attempting to
> wade into it again.
>
> I am very grateful for that.
>
>
> None of us have the hands on, daily experience with the components at play
> as you do right now, so patience on your part will be needed as we ask many
> not-so-intelligent questions.
>
> That's no problem.
>
>
>> It doesn't matter if we run the resolve function inside a resolve, fetch
>> or update task. The questions is do we want to support dynamic SRC_URIs or
>> do we want an manual update task. The task needs to be manual run after a
>> SRC_URI change and can produces a lot of noise in the update commit. In any
>> case the manual editing of the SRC_URI isn't practical and the users will
>> use the package manager to update dependencies and its recursive
>> dependencies.
>>
> I don't understand the series quite enough yet to say "why can't we do
> both". If there was a way to abstract / componentize what is generating
> those dynamic SRC_URIs in such a way that an external tool or update task
> could generate them, and if they were already in place the dynamic
> generation wouldn't run at build time, that should keep both modes working.
>
> If it is desired I can add both variants.
>

Forcing one approach over the other isn't really going to make it mergeable
(or maybe I should say make it adopted by all).

We need to help developers as well as people just doing "load build" for
distros (that should be in a steady state).  I'm under no illusion that the
way I handle the go recipes won't work for everyone either, so I definitely
wouldn't propose it as such.

>
>
>
> I admit to not understanding why we'd be overly concerned about noise in
> the commits (for the dependencies) if they are split into separate files in
> the recipe. More information is always better when I'm dealing with the
> updates. I just scroll past it if I'm not interested and filter it if I am.
>
> The problem is identifying the relevant parts. Let's say you update a
> dependency because of a security issue. Afterwards you update the project
> with a lot of dependency changes. You have to review all the noise to
> determine that your updated dependency doesn't go backwards in version. It
> is much easier to use a patch: after the project update, the patch will fail
> or not. If it fails, you have a direct pointer to the affected dependency. If
> you backport the patch from the project, you can simply drop it with the
> next update.
>

I prefer all of the extra information, but maybe that's my kernel
background. It's the same reason why all my recipe / version updates
contain all the short logs between the releases. I can just skip / ignore
the information most of the time, but it comes in handy when looking for a
security issue, etc.

In my go recipes, I have a look at the dependency SRCREVS between updates.
In particular if I'm debugging a runtime issue, that helps me quickly see
if a dependency changed and by how much it changed.

I could do the same with a drop-in lockfile that was in my recipe, since
I'd have the delta readily available and could see the source revisions,
etc.

Of course if a drop-in file was used, we'd want some sort of hash for the
original file it was clobbering, since that indicates an update would be
required (or dropping it, etc).
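Bruce's suggestion of recording a hash of the lock file a drop-in clobbers can be sketched briefly. The function name and return shape below are illustrative, not an existing OE API; the idea is simply that a recipe upgrade which changes the upstream lock file should flag the stale drop-in instead of silently overriding it:

```python
import hashlib

def check_dropin_base(upstream_lock_bytes, recorded_sha256):
    """Sketch of the hash check discussed above: record the checksum of
    the upstream lock file a drop-in replaces; a mismatch after a recipe
    upgrade flags the drop-in for review (illustrative names only)."""
    actual = hashlib.sha256(upstream_lock_bytes).hexdigest()
    if actual != recorded_sha256:
        return (False, f"upstream lock file changed (now {actual[:12]}...); "
                       "update or drop the drop-in lock file")
    return (True, "drop-in still matches the upstream lock file it replaces")

original = b'[[package]]\nname = "autocfg"\nversion = "1.1.0"\n'
recorded = hashlib.sha256(original).hexdigest()

# Same upstream lock file: the drop-in is still valid.
ok, msg = check_dropin_base(original, recorded)
# Upstream gained a dependency: the drop-in needs attention.
upgraded = original + b'\n[[package]]\nname = "libc"\nversion = "0.2.150"\n'
ok2, msg2 = check_dropin_base(upgraded, recorded)
print(ok, ok2)
```

This mirrors how SRC_URI checksums already guard fetched files, so it would fit the existing recipe conventions rather than adding a new concept.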


>
> I feel the pain (and your pain) of this after supporting complicated
> go/mixed language recipes through multiple major releases (and through go's
> changing dependency model + bleeding edge code, etc) and needing to track
> what has changed, so I definitely encourage you to keep working on this.
>
> As a compromise we could add a new feature to generate .inc cache files
>> before the main bitbake run. This would eliminate the manual update run and
>> the commit noise as well as special fetch, unpack and patch task.
>>
>> Can you elaborate on what you mean by "before the main bitbake run"? Would
> it still be under a single bitbake invocation or would it be multiple runs
> (I support multiple runs, so don't take that as a leading question).
>
> I can't answer this question and need Richard's guidance to implement such
> a feature. I would assume that bitbake already tracks file changes and can
> update its state. The behavior should be similar to a change in a .inc
> file. Bitbake would detect that an "include_cache" file is missing and run
> an update_cache task on the recipe. Afterwards bitbake would detect a file
> change on the "include_cache" file and parse it. We need a way to mark
> patches which shouldn't be applied while the "include_cache" file is
> missing, because the dependencies are missing. We need to run the fetch,
> unpack and patch tasks before the update_cache task to generate the .inc file.
>
Aha. Maybe Richard will comment later. I was thinking more about something
that was in two distinct phases, but with some more thinking and
explanation, maybe this is workable as well.

Bruce
Richard Purdie Feb. 12, 2025, 5:52 p.m. UTC | #8
On Wed, 2025-02-12 at 12:45 -0500, Bruce Ashfield wrote:
> On Wed, Feb 12, 2025 at 12:24 PM Stefan Herbrechtsmeier
> <stefan.herbrechtsmeier-oss@weidmueller.com> wrote:
> 
> > 
> > > 
> > > > 
> > > > As a compromise we could add a new feature to generate .inc
> > > > cache files before the main bitbake run. This would eliminate
> > > > the manual update run and the commit noise as well as special
> > > > fetch, unpack and patch task.
> > > >   
> > > Can you elaborate on what you mean by before the main bitbake run
> > > ? Would it be still under a single bitbake invokation or would it
> > > be multiple runs (I support multiple runs, so don't take that as
> > > a leading question).
> > >  
> > I can't answer this questions and need Richard guidance to
> > implement such a feature. I would assume that bitbake already track
> > file changes and can update its state. The behavior should be
> > similar to a change in the .inc file. Bitbake will detect that a
> > "include_cache" file is missing and run an update_cache task on the
> > recipe. Afterwards bitbake detect a file change on the
> > "include_cache" file and parse it. We need a possibility to mark
> > patches which shouldn't be applied if the "include_cache" file is
> > missing because the dependencies are missing. We need to run the
> > fetch, unpack and patch task before the update_cache task to
> > generate the .inc file.
> >  
> Aha. Maybe Richard will comment later. I was thinking more about
> something that was in two distinct phases, but with some more
> thinking and explanation, maybe this is workable as well.

We don't have anything in bitbake which could support this currently.
We parse once, store a subset of the data and then have all the
information bitbake needs. There is no concept of, or support for, a
second parse of something else later in the task graph, nor support for
"generate this file if missing", particularly if generating that file
would require other tasks to run.

That said, I am wondering if we need to do something differently here,
it is obviously just going to complicate things a lot if we need new
concepts and API at that level in bitbake.

Cheers,

Richard
Stefan Herbrechtsmeier Feb. 13, 2025, 12:45 p.m. UTC | #9
Am 12.02.2025 um 18:45 schrieb Bruce Ashfield:
>
>
> On Wed, Feb 12, 2025 at 12:24 PM Stefan Herbrechtsmeier 
> <stefan.herbrechtsmeier-oss@weidmueller.com> wrote:
>
>     Am 12.02.2025 um 16:07 schrieb Bruce Ashfield:
>>     On Wed, Feb 12, 2025 at 9:36 AM Stefan Herbrechtsmeier via
>>     lists.openembedded.org
>>     <stefan.herbrechtsmeier-oss=weidmueller.com@lists.openembedded.org>
>>     wrote:
>>
>>         Am 11.02.2025 um 22:46 schrieb Richard Purdie:
>>>         On Tue, 2025-02-11 at 16:00 +0100, Stefan Herbrechtsmeier via lists.openembedded.org wrote:
>>>>         From: Stefan Herbrechtsmeier <stefan.herbrechtsmeier@weidmueller.com>
>>>>
>>>>         Signed-off-by: Stefan Herbrechtsmeier <stefan.herbrechtsmeier@weidmueller.com>
>>>>         ---
>>>>
>>>>           .../python/python3-bcrypt-crates.inc          | 84 -------------------
>>>>           .../python/python3-bcrypt_4.2.1.bb            |  4 +-
>>>>           2 files changed, 1 insertion(+), 87 deletions(-)
>>>>           delete mode 100644 meta/recipes-devtools/python/python3-bcrypt-crates.inc
>>>         So let me as the silly question. This removes the crates.inc file and
>>>         doesn't appear to add any kind of new list of locked down modules.
>>         The list is generated on the fly like gitsm and doesn't
>>         require an extra step.
>>
>>>         This means that inspection tools just using the metadata can't see
>>>         "into" this recipe any longer for component information.
>>
>>         We support and use python code inside the variables and
>>         thereby need a preprocessing of the metadata in any case.
>>
>>         What do you mean by "component information"?
>>
>>>         This was
>>>         something that some people felt strongly that was a necessary part of
>>>         recipe metadata, for license, security and other manifest activities.
>>
>>         Why can't they use the SBOM for this?
>>
>>>         Are we basically saying that information is now only available after
>>>         the build takes place?
>>         They are only available after a special task run.
>>
>>>         I'm very worried that the previous discussions didn't reach a
>>>         conclusion and this is moving the "magic" out of bitbake and into some
>>>         vendor classes without addressing the concerns previously raised about
>>>         transparency into the manifests of what is going on behind the scenes.
>>
>>         I try to address the concerns but don't realize that the
>>         missing information in the recipe is a blocker.
>>
>>         This version gives the user the possibility to influence the
>>         dependencies via patches or alternative lock file. It creates
>>         a vendor folder for easy patch and debug. It integrates the
>>         dependencies into the SBOM for security tracking.
>>
>>         I skipped the license topic for now because the package
>>         managers don't handle license integrity. We have to keep the
>>         information in the recipe but hopefully the license
>>         information doesn't change with each update.
>>
>>         I don't understand the requirement for the plain inspection.
>>         In my opinion external tools should always use a defined
>>         output and shouldn't depend on the project internal details.
>>         I adapt the existing users of the SRC_URI to include the
>>         dynamic SRC_URIs.
>>
>>>         I appreciate some of the requirements are conflicting.
>>>
>>>         For the record in some recent meetings, I was promised that help would
>>>         be forthcoming in helping guide this discussion. I therefore left
>>>         things alone in the hope that would happen. It simply hasn't, probably
>>>         due to time/work issues, which I can sympathise with but it does mean
>>>         I'm left doing a bad job of trying to respond to your patches whilst
>>>         trying to do too many other things badly too. That leaves us both very
>>>         frustrated.
>>>
>>>         I really want to see you succeed in reworking this and I appreciate the
>>>         time and effort put into the patches. To make this successful, I know
>>>         there are key stakeholders who need to buy into it and right now,
>>>         they're more likely just to keep doing their own things as it is easier
>>>         since this isn't going the direction they want. A key piece of making
>>>         this successful is negotiating something which can work for a
>>>         significant portion of them. I'm spelling all this out since I do at
>>>         least want to make the situation clear.
>>>
>>>         Yes, I'm very upset the OE community is putting me in this position
>>>         despite me repeatedly asking for help and that isn't your fault, which
>>>         just frustrates me more.
>>
>>         My problem is the double standards. We support a fetcher
>>         which dynamic resolve dependencies and without manual update
>>         step since years. Nobody suggests to make the gitsm fetcher
>>         obsolete and requests the users to run an update task after a
>>         SRC_URI change to create a .inc file with the SRC_URIs of all
>>         the recursive submodules. Nobody complains about the missing
>>         components in the recipe.
>>
>>     There's no double standard, I'd simply say that design decisions
>>     of the past doesn't mean that there aren't better ways to do
>>     something new.
>>
>>     Richard went out of his way to explain the status and what sort
>>     of review needs to happen, I'll add that while getting frustrated
>>     with it is natural, pushing back on people doing reviews isn't
>>     going to help get things merged, it will do the opposite.
>>
>>     There have been plenty of complaints and issues with the gitsm
>>     fetcher, but the reality is that if someone wants to get at the
>>     base components of what it is doing, they can do so. I've had to
>>     take several of my maintained recipes out of gitsm and back to
>>     the base git fetches. The submodules were simply fetching code
>>     that didn't build and there was no way to fetch it.  The gitsm
>>     fetcher is also relatively lightly used, much less complicated
>>     and doesn't need much extra in infrastructure to support it.
>
>     Thanks for your insights. There are two main solutions to the
>     problem: add patch support to gitsm so that you can use the git
>     submodule command and create a patch, or generate a .inc file and
>     manipulate the SRCREVs. I assume you would prefer a .inc file.
>     What do you think is the downside of a patch?
>
>
> It is much easier to get a complete view of the file with a drop-in, 
> versus a patch (depending on the size of the file). You need to know 
> the base directory, the depth, put in an upstream-status, create it, 
> copy it to your layer, etc. With a drop-in lock file, I copy it out, 
> edit it, and add it to my SRC_URI.  Not much difference in the end, 
> but I prefer the drop-in approach on pretty much any configuration 
> file in my builds, not just this example.

Is a drop-in lock file okay for you, or do you require support for 
generating .inc files?

>
>>
>>         Whether we have hard requirements and introduce a git
>>         submodule support which satisfy the requirements or we accept
>>         the advantages of a simple user interface and minimize the
>>         disadvantages.
>>
>>     Unfortunately in my experience the simple interfaces hiding
>>     complexity don't help when things go wrong. That's how I ended up
>>     where I am with my go recipes, and why I ended up tearing my
>>     gitsm recipe back into its components. There was no way to
>>     influence / fix the build otherwise, and they didn't support
>>     bleeding edge development very well.
>     Do you have a good example for a problematic go recipe to test my
>     approach?
>
>
> Not right now, the current state is relatively stable as I'm working 
> towards the LTS release. These have just popped up repeatedly over the 
> maybe 5+ years (I can't remember how long it has been!) in maintaining 
> meta-virtualization. I have no doubts (and am not implying) that your 
> series could adapt my recipes and use all of the go mod infrastructure 
> .. just with all of the vendoring and go mod efforts over the years, 
> going all the way back to the actual source code gave a lot more 
> visibility into the vendor dependencies (and not just what was 
> released for them) and I've used it many times while debugging runtime 
> issues of the container stacks.
>
>
>>     I'm definitely one of the people Richard is mentioning as a
>>     stakeholder, and one that could likely just ignore all of this ..
>>     but I'm attempting to wade into it again.
>
>     I am very grateful for that.
>
>>
>>     None of us have the hands on, daily experience with the
>>     components at play as you do right now, so patience on your part
>>     will be needed as we ask many not-so-intelligent questions.
>
>     That's no problem.
>
>>
>>         It doesn't matter if we run the resolve function inside a
>>         resolve, fetch or update task. The questions is do we want to
>>         support dynamic SRC_URIs or do we want an manual update task.
>>         The task needs to be manual run after a SRC_URI change and
>>         can produces a lot of noise in the update commit. In any case
>>         the manual editing of the SRC_URI isn't practical and the
>>         users will use the package manager to update dependencies and
>>         its recursive dependencies.
>>
>>     I don't understand the series quite enough yet to say "why can't
>>     we do both", if there was a way to abstract / componentize what
>>     is generating those dynamic SCR_URIS in such a way that an
>>     external tool or update task could generate them, and if they
>>     were already in place the dynamic generation wouldn't run at
>>     build time, that should keep both modes working.
>     If it is desired I can add both variants.
>
>
> Forcing one approach over the other isn't really going to make it 
> mergeable (or maybe I should say make it adopted by all).
>
> We need to help developers as well as people just doing "load build" 
> for distros (that should be in a steady state).  I'm under no illusion 
> that the way I handle the go recipes won't work for everyone either, 
> so I definitely wouldn't propose it as such.
The disadvantage of two approaches is that you double the issues. It is 
much simpler if we always depend on the lock file and only support 
embedded and drop-in lock files. I can optimize the generated SRC_URIs, 
but I wouldn't support manipulation of the SRC_URI because of its 
unforeseeable consequences. We don't know which commands of the package 
manager depend on the lock file, and I would avoid regenerating the 
lock file.
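The "embedded or drop-in only" rule Stefan describes amounts to a simple precedence: a lock file supplied via the recipe wins, otherwise the one shipped in the unpacked sources is used. A small Python sketch, with illustrative names rather than any existing OE API:

```python
from pathlib import Path
import tempfile

def select_lock_file(workdir, dropin=None):
    """Sketch of the embedded-or-drop-in rule discussed above: a drop-in
    lock file from the recipe's SRC_URI takes precedence; otherwise fall
    back to the lock file embedded in the unpacked sources."""
    if dropin is not None:
        return Path(dropin)          # drop-in from the recipe wins
    embedded = Path(workdir) / "Cargo.lock"
    if embedded.exists():
        return embedded              # lock file shipped with the sources
    raise FileNotFoundError("no lock file: add a drop-in to SRC_URI")

tmp = tempfile.mkdtemp()
# A drop-in is honoured unconditionally, even before sources are inspected.
chosen = select_lock_file(tmp, dropin="/path/from/recipe/Cargo.lock")
# With no drop-in, the embedded lock file is used once it exists.
Path(tmp, "Cargo.lock").write_text("[[package]]\n")
fallback = select_lock_file(tmp)
```

Keeping the lock file as the single source of truth this way avoids the consistency problems of a manipulated SRC_URI, since every package-manager command sees the same file the build used.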

>>     I admit to not understanding why we'd be overly concerned about
>>     noise in the commits (for the dependencies) if they are split
>>     into separate files in the recipe. More information is always
>>     better when I'm dealing with the updates. I just scroll past it
>>     if I'm not interested and filter it if I am.
>     The problem is identifying the relevant parts. Let's say you update
>     a dependency because of a security issue. Afterwards you update
>     the project with a lot of dependency changes. You have to review
>     all the noise to determine that your updated dependency doesn't
>     go backwards in version. It is much easier to use a patch:
>     after the project update, the patch will fail or not. If it fails,
>     you have a direct pointer to the affected dependency. If you
>     backport the patch from the project, you can simply drop it with the
>     next update.
>
>
> I prefer all of the extra information, but maybe that's my kernel 
> background. It's the same reason why all my recipe / version updates 
> contain all the short logs between the releases. I can just skip / 
> ignore the information most of the time, but it comes in handy when 
> looking for a security issue, etc.
Maybe we could add the dynamic dependencies to the buildhistory.

> In my go recipes, I have a look at the dependency SRCREVS between 
> updates. In particular if I'm debugging a runtime issue, that helps me 
> quickly see if a dependency changed and by how much it changed.
>
> I could do the same with a drop-in lockfile that was in my recipe, 
> since I'd have the delta readily available and could see the source 
> revisions, etc.
>
> Of course if a drop-in file was used, we'd want some sort of hash for 
> the original file it was clobbering, since that indicates an update 
> would be required (or dropping it, etc).
>
>
>
>>     I feel the pain (and your pain) of this after supporting
>>     complicated go/mixed language recipes through multiple major
>>     releases (and through go's changing dependency model + bleeding
>>     edge code, etc) and needing to track what has changed, so I
>>     definitely encourage you to keep working on this.
>>
>>         As a compromise we could add a new feature to generate .inc
>>         cache files before the main bitbake run. This would eliminate
>>         the manual update run and the commit noise as well as special
>>         fetch, unpack and patch task.
>>
>>     Can you elaborate on what you mean by before the main bitbake run
>>     ? Would it be still under a single bitbake invokation or would it
>>     be multiple runs (I support multiple runs, so don't take that as
>>     a leading question).
>
>     I can't answer this questions and need Richard guidance to
>     implement such a feature. I would assume that bitbake already
>     track file changes and can update its state. The behavior should
>     be similar to a change in the .inc file. Bitbake will detect that
>     a "include_cache" file is missing and run an update_cache task on
>     the recipe. Afterwards bitbake detect a file change on the
>     "include_cache" file and parse it. We need a possibility to mark
>     patches which shouldn't be applied if the "include_cache" file is
>     missing because the dependencies are missing. We need to run the
>     fetch, unpack and patch task before the update_cache task to
>     generate the .inc file.
>
> Aha. Maybe Richard will comment later. I was thinking more about 
> something that was in two distinct phases, but with some more thinking 
> and explanation, maybe this is workable as well.

This is only needed if we need the dynamic SRC_URIs outside of bitbake 
in the common bitbake format (SRC_URI += "..."). Otherwise we could save 
the information inside the WORKDIR and depend on the sstate cache.
Bruce Ashfield Feb. 13, 2025, 5:07 p.m. UTC | #10
On Thu, Feb 13, 2025 at 7:45 AM Stefan Herbrechtsmeier <
stefan.herbrechtsmeier-oss@weidmueller.com> wrote:

> Am 12.02.2025 um 18:45 schrieb Bruce Ashfield:
>
>
>
> On Wed, Feb 12, 2025 at 12:24 PM Stefan Herbrechtsmeier <
> stefan.herbrechtsmeier-oss@weidmueller.com> wrote:
>
>> Am 12.02.2025 um 16:07 schrieb Bruce Ashfield:
>>
>> On Wed, Feb 12, 2025 at 9:36 AM Stefan Herbrechtsmeier via
>> lists.openembedded.org <stefan.herbrechtsmeier-oss=
>> weidmueller.com@lists.openembedded.org> wrote:
>>
>>> Am 11.02.2025 um 22:46 schrieb Richard Purdie:
>>>
>>> On Tue, 2025-02-11 at 16:00 +0100, Stefan Herbrechtsmeier via lists.openembedded.org wrote:
>>>
>>> From: Stefan Herbrechtsmeier <stefan.herbrechtsmeier@weidmueller.com>
>>>
>>> Signed-off-by: Stefan Herbrechtsmeier <stefan.herbrechtsmeier@weidmueller.com>
>>> ---
>>>
>>>  .../python/python3-bcrypt-crates.inc          | 84 -------------------
>>>  .../python/python3-bcrypt_4.2.1.bb            |  4 +-
>>>  2 files changed, 1 insertion(+), 87 deletions(-)
>>>  delete mode 100644 meta/recipes-devtools/python/python3-bcrypt-crates.inc
>>>
>>> So let me as the silly question. This removes the crates.inc file and
>>> doesn't appear to add any kind of new list of locked down modules.
>>>
>>> The list is generated on the fly like gitsm and doesn't require an extra
>>> step.
>>>
>>> This means that inspection tools just using the metadata can't see
>>> "into" this recipe any longer for component information.
>>>
>>> We support and use python code inside the variables and thereby need
>>> preprocessing of the metadata in any case.
>>>
>>> What do you mean by "component information"?
>>>
>>> This was
>>> something that some people felt strongly that was a necessary part of
>>> recipe metadata, for license, security and other manifest activities.
>>>
>>> Why can't they use the SBOM for this?
>>>
>>> Are we basically saying that information is now only available after
>>> the build takes place?
>>>
>>> They are only available after a special task has run.
>>>
>>> I'm very worried that the previous discussions didn't reach a
>>> conclusion and this is moving the "magic" out of bitbake and into some
>>> vendor classes without addressing the concerns previously raised about
>>> transparency into the manifests of what is going on behind the scenes.
>>>
>>> I have tried to address the concerns, but didn't realize that the
>>> missing information in the recipe would be a blocker.
>>>
>>> This version gives the user the possibility to influence the
>>> dependencies via patches or alternative lock file. It creates a vendor
>>> folder for easy patch and debug. It integrates the dependencies into the
>>> SBOM for security tracking.
>>>
>>> I skipped the license topic for now because the package managers don't
>>> handle license integrity. We have to keep the information in the recipe but
>>> hopefully the license information doesn't change with each update.
>>>
>>> I don't understand the requirement for plain inspection. In my opinion,
>>> external tools should always use a defined output and shouldn't depend on
>>> project-internal details. I have adapted the existing users of SRC_URI to
>>> include the dynamic SRC_URIs.
>>>
>>> I appreciate some of the requirements are conflicting.
>>>
>>> For the record in some recent meetings, I was promised that help would
>>> be forthcoming in helping guide this discussion. I therefore left
>>> things alone in the hope that would happen. It simply hasn't, probably
>>> due to time/work issues, which I can sympathise with but it does mean
>>> I'm left doing a bad job of trying to respond to your patches whilst
>>> trying to do too many other things badly too. That leaves us both very
>>> frustrated.
>>>
>>> I really want to see you succeed in reworking this and I appreciate the
>>> time and effort put into the patches. To make this successful, I know
>>> there are key stakeholders who need to buy into it and right now,
>>> they're more likely just to keep doing their own things as it is easier
>>> since this isn't going the direction they want. A key piece of making
>>> this successful is negotiating something which can work for a
>>> significant portion of them. I'm spelling all this out since I do at
>>> least want to make the situation clear.
>>>
>>> Yes, I'm very upset the OE community is putting me in this position
>>> despite me repeatedly asking for help and that isn't your fault, which
>>> just frustrates me more.
>>>
>>> My problem is the double standard. We have supported a fetcher which
>>> dynamically resolves dependencies, without a manual update step, for years.
>>> Nobody suggests making the gitsm fetcher obsolete and requiring users to run
>>> an update task after a SRC_URI change to create a .inc file with the
>>> SRC_URIs of all the recursive submodules. Nobody complains about the
>>> missing components in the recipe.
>>>
>> There's no double standard; I'd simply say that design decisions of the
>> past don't mean that there aren't better ways to do something new.
>>
>> Richard went out of his way to explain the status and what sort of review
>> needs to happen, I'll add that while getting frustrated with it is natural,
>> pushing back on people doing reviews isn't going to help get things merged,
>> it will do the opposite.
>>
>> There have been plenty of complaints and issues with the gitsm fetcher,
>> but the reality is that if someone wants to get at the base components of
>> what it is doing, they can do so. I've had to take several of my maintained
>> recipes out of gitsm and back to the base git fetches. The submodules were
>> simply fetching code that didn't build and there was no way to fetch it.
>> The gitsm fetcher is also relatively lightly used, much less complicated
>> and doesn't need much extra in infrastructure to support it.
>>
>> Thanks for your insights. There are two main solutions for the problem:
>> add patch support to gitsm, so that you could use the git submodule command
>> and create a patch, or generate a .inc file and manipulate the SRCREVs. I
>> assume you would prefer a .inc file. What do you think is the downside of a
>> patch?
>>
>
> It is much easier to get a complete view of the file with a drop-in,
> versus a patch (depending on the size of the file). You need to know the
> base directory, the depth, put in an upstream-status, create it, copy it to
> your layer, etc. With a drop-in lock file, I copy it out, edit it, and add
> it to my SRC_URI.  Not much difference in the end, but I prefer the drop-in
> approach on pretty much any configuration file in my builds, not just this
> example.
>
> Is a drop-in lock file okay for you, or do you require support to
> generate .inc files?
>
If the drop-in lock file is a one-by-one listing of dependencies that can
be mapped to fetches, then it serves the same purpose as the .inc file, so
that would be fine with me. But I'd have to see what the implementation
looked like to be sure!



>
>
>>
>>> Either we have hard requirements and introduce git submodule support
>>> which satisfies the requirements, or we accept the advantages of a simple
>>> user interface and minimize the disadvantages.
>>>
>> Unfortunately in my experience the simple interfaces hiding complexity
>> don't help when things go wrong. That's how I ended up where I am with my
>> go recipes, and why I ended up tearing my gitsm recipe back into its
>> components. There was no way to influence / fix the build otherwise, and
>> they didn't support bleeding edge development very well.
>>
>> Do you have a good example of a problematic go recipe to test my
>> approach?
>>
>
> Not right now, the current state is relatively stable as I'm working
> towards the LTS release. These have just popped up repeatedly over the
> maybe 5+ years (I can't remember how long it has been!) in maintaining
> meta-virtualization. I have no doubts (and am not implying) that your
> series could adapt my recipes and use all of the go mod infrastructure ..
> just with all of the vendoring and go mod efforts over the years, going all
> the way back to the actual source code gave a lot more visibility into the
> vendor dependencies (and not just what was released for them) and I've used
> it many times while debugging runtime issues of the container stacks.
>
>
>> I'm definitely one of the people Richard is mentioning as a stakeholder,
>> and one that could likely just ignore all of this .. but I'm attempting to
>> wade into it again.
>>
>> I am very grateful for that.
>>
>>
>> None of us have the hands-on, daily experience with the components at
>> play as you do right now, so patience on your part will be needed as we ask
>> many not-so-intelligent questions.
>>
>> That's no problem.
>>
>>
>>> It doesn't matter if we run the resolve function inside a resolve, fetch
>>> or update task. The question is: do we want to support dynamic SRC_URIs, or
>>> do we want a manual update task? The task needs to be manually run after a
>>> SRC_URI change and can produce a lot of noise in the update commit. In any
>>> case, manual editing of the SRC_URI isn't practical and users will use the
>>> package manager to update dependencies and their recursive dependencies.
>>>
>> I don't understand the series quite enough yet to say "why can't we do
>> both". If there was a way to abstract / componentize what is generating
>> those dynamic SRC_URIs in such a way that an external tool or update task
>> could generate them, and the dynamic generation wouldn't run at build time
>> when they were already in place, that should keep both modes working.
>>
>> If it is desired I can add both variants.
>>
>
> Forcing one approach over the other isn't really going to make it
> mergeable (or maybe I should say make it adopted by all).
>
> We need to help developers as well as people just doing "load build" for
> distros (that should be in a steady state).  I'm under no illusion that the
> way I handle the go recipes won't work for everyone either, so I definitely
> wouldn't propose it as such.
>
> The disadvantage of two approaches is that you double the issues. It is
> much simpler if we always depend on the lock file and only support embedded
> and drop-in lock files. I can optimize the generated SRC_URIs but wouldn't
> support manipulation of the SRC_URI because of its unforeseeable
> consequences. We don't know which commands of the package manager depend
> on the lock file, and I would avoid regenerating the lock file.
>

Agreed that we need something maintainable, but we can't just forget about
the developer use case. We need to be able to support iterative, developer
workflows. This is something that we've always tried to do with the kernel
workflows, they aren't perfect, but those modes of working are considered.



>
>> I admit to not understanding why we'd be overly concerned about noise in
>> the commits (for the dependencies) if they are split into separate files in
>> the recipe. More information is always better when I'm dealing with the
>> updates. I just scroll past it if I'm not interested and filter it if I am.
>>
>> The problem is identifying the relevant parts. Let's say you update a
>> dependency because of a security issue. Afterwards you update the project
>> with a lot of dependency changes. You have to review all the noise to
>> determine whether your updated dependency went backward in its version. It
>> is much easier to use a patch. After the project update, the patch will fail
>> or not. If it fails, you have a direct focus on the affected dependency. If
>> you backported the patch from the project, you can simply drop it with the
>> next update.
>>
>
> I prefer all of the extra information, but maybe that's my kernel
> background. It's the same reason why all my recipe / version updates
> contain all the short logs between the releases. I can just skip / ignore
> the information most of the time, but it comes in handy when looking for a
> security issue, etc.
>
> Maybe we could add the dynamic dependencies to the buildhistory.
>

That would make buildhistory a requirement, which is something (as an
example) that I don't use. I'm just talking about something that goes along
with the git history of the recipe updates. All the information in one
place. I wouldn't get too hung up on this right now, as it would be simple
enough to commit changes to lock files, etc, with generated long logs that
contained all the information. That is outside of what this series needs to
consider.



>
> In my go recipes, I have a look at the dependency SRCREVS between updates.
> In particular if I'm debugging a runtime issue, that helps me quickly see
> if a dependency changed and by how much it changed.
>
> I could do the same with a drop-in lockfile that was in my recipe, since
> I'd have the delta readily available and could see the source revisions,
> etc.
>
> Of course if a drop-in file was used, we'd want some sort of hash for the
> original file it was clobbering, since that indicates an update would be
> required (or dropping it, etc).
>
>
>>
>> I feel the pain (and your pain) of this after supporting complicated
>> go/mixed language recipes through multiple major releases (and through go's
>> changing dependency model + bleeding edge code, etc) and needing to track
>> what has changed, so I definitely encourage you to keep working on this.
>>
>>> As a compromise we could add a new feature to generate .inc cache files
>>> before the main bitbake run. This would eliminate the manual update run and
>>> the commit noise, as well as the special fetch, unpack and patch tasks.
>>>
>> Can you elaborate on what you mean by before the main bitbake run?
>> Would it still be under a single bitbake invocation or would it be multiple
>> runs (I support multiple runs, so don't take that as a leading question).
>>
>> I can't answer this question and need Richard's guidance to implement
>> such a feature. I would assume that bitbake already tracks file changes and
>> can update its state. The behavior should be similar to a change in the .inc
>> file. Bitbake will detect that an "include_cache" file is missing and run an
>> update_cache task on the recipe. Afterwards bitbake detects a file change on
>> the "include_cache" file and parses it. We need a way to mark patches which
>> shouldn't be applied if the "include_cache" file is missing, because the
>> dependencies are missing. We need to run the fetch, unpack and patch tasks
>> before the update_cache task to generate the .inc file.
>>
> Aha. Maybe Richard will comment later. I was thinking more about something
> that was in two distinct phases, but with some more thinking and
> explanation, maybe this is workable as well.
>
> This is only needed if we need the dynamic SRC_URIs outside of bitbake in
> the common bitbake format (SRC_URI += "..."). Otherwise we could save the
> information inside the WORKDIR and depend on the sstate cache.
>

I think we are describing different things again. I'm just looking for the
ability to run some of the tasks separately and do it in two distinct
steps. So that wouldn't involve WORKDIR or sstate, but would just generate
the lock or .inc file, or whatever. And then have the build use that file
later.
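
For what it's worth, the pre-patch crate handling already had this two-step shape: per the header of the deleted crates.inc, the file was autogenerated by an explicit task, and the next build then consumed it (a sketch of that flow, not a new interface):

```
# Step 1: (re)generate the locked dependency list as a .inc file
# (provided by the cargo-update-recipe-crates class this patch drops)
bitbake -c update_crates python3-bcrypt

# Step 2: a normal build parses and uses the generated ${BPN}-crates.inc
bitbake python3-bcrypt
```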

Bruce
diff mbox series

Patch

diff --git a/meta/recipes-devtools/python/python3-bcrypt-crates.inc b/meta/recipes-devtools/python/python3-bcrypt-crates.inc
deleted file mode 100644
index 576abcd7cb..0000000000
--- a/meta/recipes-devtools/python/python3-bcrypt-crates.inc
+++ /dev/null
@@ -1,84 +0,0 @@ 
-# Autogenerated with 'bitbake -c update_crates python3-bcrypt'
-
-# from src/_bcrypt/Cargo.lock
-SRC_URI += " \
-    crate://crates.io/autocfg/1.4.0 \
-    crate://crates.io/base64/0.22.1 \
-    crate://crates.io/bcrypt/0.16.0 \
-    crate://crates.io/bcrypt-pbkdf/0.10.0 \
-    crate://crates.io/block-buffer/0.10.4 \
-    crate://crates.io/blowfish/0.9.1 \
-    crate://crates.io/byteorder/1.5.0 \
-    crate://crates.io/cfg-if/1.0.0 \
-    crate://crates.io/cipher/0.4.4 \
-    crate://crates.io/cpufeatures/0.2.15 \
-    crate://crates.io/crypto-common/0.1.6 \
-    crate://crates.io/digest/0.10.7 \
-    crate://crates.io/generic-array/0.14.7 \
-    crate://crates.io/getrandom/0.2.15 \
-    crate://crates.io/heck/0.5.0 \
-    crate://crates.io/indoc/2.0.5 \
-    crate://crates.io/inout/0.1.3 \
-    crate://crates.io/libc/0.2.164 \
-    crate://crates.io/memoffset/0.9.1 \
-    crate://crates.io/once_cell/1.20.2 \
-    crate://crates.io/pbkdf2/0.12.2 \
-    crate://crates.io/portable-atomic/1.9.0 \
-    crate://crates.io/proc-macro2/1.0.89 \
-    crate://crates.io/pyo3/0.23.1 \
-    crate://crates.io/pyo3-build-config/0.23.1 \
-    crate://crates.io/pyo3-ffi/0.23.1 \
-    crate://crates.io/pyo3-macros/0.23.1 \
-    crate://crates.io/pyo3-macros-backend/0.23.1 \
-    crate://crates.io/quote/1.0.37 \
-    crate://crates.io/sha2/0.10.8 \
-    crate://crates.io/subtle/2.6.1 \
-    crate://crates.io/syn/2.0.87 \
-    crate://crates.io/target-lexicon/0.12.16 \
-    crate://crates.io/typenum/1.17.0 \
-    crate://crates.io/unicode-ident/1.0.13 \
-    crate://crates.io/unindent/0.2.3 \
-    crate://crates.io/version_check/0.9.5 \
-    crate://crates.io/wasi/0.11.0+wasi-snapshot-preview1 \
-    crate://crates.io/zeroize/1.8.1 \
-"
-
-SRC_URI[autocfg-1.4.0.sha256sum] = "ace50bade8e6234aa140d9a2f552bbee1db4d353f69b8217bc503490fc1a9f26"
-SRC_URI[base64-0.22.1.sha256sum] = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6"
-SRC_URI[bcrypt-0.16.0.sha256sum] = "2b1866ecef4f2d06a0bb77880015fdf2b89e25a1c2e5addacb87e459c86dc67e"
-SRC_URI[bcrypt-pbkdf-0.10.0.sha256sum] = "6aeac2e1fe888769f34f05ac343bbef98b14d1ffb292ab69d4608b3abc86f2a2"
-SRC_URI[block-buffer-0.10.4.sha256sum] = "3078c7629b62d3f0439517fa394996acacc5cbc91c5a20d8c658e77abd503a71"
-SRC_URI[blowfish-0.9.1.sha256sum] = "e412e2cd0f2b2d93e02543ceae7917b3c70331573df19ee046bcbc35e45e87d7"
-SRC_URI[byteorder-1.5.0.sha256sum] = "1fd0f2584146f6f2ef48085050886acf353beff7305ebd1ae69500e27c67f64b"
-SRC_URI[cfg-if-1.0.0.sha256sum] = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
-SRC_URI[cipher-0.4.4.sha256sum] = "773f3b9af64447d2ce9850330c473515014aa235e6a783b02db81ff39e4a3dad"
-SRC_URI[cpufeatures-0.2.15.sha256sum] = "0ca741a962e1b0bff6d724a1a0958b686406e853bb14061f218562e1896f95e6"
-SRC_URI[crypto-common-0.1.6.sha256sum] = "1bfb12502f3fc46cca1bb51ac28df9d618d813cdc3d2f25b9fe775a34af26bb3"
-SRC_URI[digest-0.10.7.sha256sum] = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292"
-SRC_URI[generic-array-0.14.7.sha256sum] = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a"
-SRC_URI[getrandom-0.2.15.sha256sum] = "c4567c8db10ae91089c99af84c68c38da3ec2f087c3f82960bcdbf3656b6f4d7"
-SRC_URI[heck-0.5.0.sha256sum] = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"
-SRC_URI[indoc-2.0.5.sha256sum] = "b248f5224d1d606005e02c97f5aa4e88eeb230488bcc03bc9ca4d7991399f2b5"
-SRC_URI[inout-0.1.3.sha256sum] = "a0c10553d664a4d0bcff9f4215d0aac67a639cc68ef660840afe309b807bc9f5"
-SRC_URI[libc-0.2.164.sha256sum] = "433bfe06b8c75da9b2e3fbea6e5329ff87748f0b144ef75306e674c3f6f7c13f"
-SRC_URI[memoffset-0.9.1.sha256sum] = "488016bfae457b036d996092f6cb448677611ce4449e970ceaf42695203f218a"
-SRC_URI[once_cell-1.20.2.sha256sum] = "1261fe7e33c73b354eab43b1273a57c8f967d0391e80353e51f764ac02cf6775"
-SRC_URI[pbkdf2-0.12.2.sha256sum] = "f8ed6a7761f76e3b9f92dfb0a60a6a6477c61024b775147ff0973a02653abaf2"
-SRC_URI[portable-atomic-1.9.0.sha256sum] = "cc9c68a3f6da06753e9335d63e27f6b9754dd1920d941135b7ea8224f141adb2"
-SRC_URI[proc-macro2-1.0.89.sha256sum] = "f139b0662de085916d1fb67d2b4169d1addddda1919e696f3252b740b629986e"
-SRC_URI[pyo3-0.23.1.sha256sum] = "7ebb0c0cc0de9678e53be9ccf8a2ab53045e6e3a8be03393ceccc5e7396ccb40"
-SRC_URI[pyo3-build-config-0.23.1.sha256sum] = "80e3ce69c4ec34476534b490e412b871ba03a82e35604c3dfb95fcb6bfb60c09"
-SRC_URI[pyo3-ffi-0.23.1.sha256sum] = "3b09f311c76b36dfd6dd6f7fa6f9f18e7e46a1c937110d283e80b12ba2468a75"
-SRC_URI[pyo3-macros-0.23.1.sha256sum] = "fd4f74086536d1e1deaff99ec0387481fb3325c82e4e48be0e75ab3d3fcb487a"
-SRC_URI[pyo3-macros-backend-0.23.1.sha256sum] = "9e77dfeb76b32bbf069144a5ea0a36176ab59c8db9ce28732d0f06f096bbfbc8"
-SRC_URI[quote-1.0.37.sha256sum] = "b5b9d34b8991d19d98081b46eacdd8eb58c6f2b201139f7c5f643cc155a633af"
-SRC_URI[sha2-0.10.8.sha256sum] = "793db75ad2bcafc3ffa7c68b215fee268f537982cd901d132f89c6343f3a3dc8"
-SRC_URI[subtle-2.6.1.sha256sum] = "13c2bddecc57b384dee18652358fb23172facb8a2c51ccc10d74c157bdea3292"
-SRC_URI[syn-2.0.87.sha256sum] = "25aa4ce346d03a6dcd68dd8b4010bcb74e54e62c90c573f394c46eae99aba32d"
-SRC_URI[target-lexicon-0.12.16.sha256sum] = "61c41af27dd6d1e27b1b16b489db798443478cef1f06a660c96db617ba5de3b1"
-SRC_URI[typenum-1.17.0.sha256sum] = "42ff0bf0c66b8238c6f3b578df37d0b7848e55df8577b3f74f92a69acceeb825"
-SRC_URI[unicode-ident-1.0.13.sha256sum] = "e91b56cd4cadaeb79bbf1a5645f6b4f8dc5bde8834ad5894a8db35fda9efa1fe"
-SRC_URI[unindent-0.2.3.sha256sum] = "c7de7d73e1754487cb58364ee906a499937a0dfabd86bcb980fa99ec8c8fa2ce"
-SRC_URI[version_check-0.9.5.sha256sum] = "0b928f33d975fc6ad9f86c8f283853ad26bdd5b10b7f1542aa2fa15e2289105a"
-SRC_URI[wasi-0.11.0+wasi-snapshot-preview1.sha256sum] = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423"
-SRC_URI[zeroize-1.8.1.sha256sum] = "ced3678a2879b30306d323f4542626697a464a97c0a07c9aebf7ebca65cd4dde"
diff --git a/meta/recipes-devtools/python/python3-bcrypt_4.2.1.bb b/meta/recipes-devtools/python/python3-bcrypt_4.2.1.bb
index 004e8ce8b1..9117637744 100644
--- a/meta/recipes-devtools/python/python3-bcrypt_4.2.1.bb
+++ b/meta/recipes-devtools/python/python3-bcrypt_4.2.1.bb
@@ -8,12 +8,10 @@  LDFLAGS += "${@bb.utils.contains('DISTRO_FEATURES', 'ptest', '-fuse-ld=bfd', '',
 
 SRC_URI[sha256sum] = "6765386e3ab87f569b276988742039baab087b2cdb01e809d74e74503c2faafe"
 
-inherit pypi python_setuptools3_rust cargo-update-recipe-crates ptest-python-pytest
+inherit pypi python_setuptools3_rust ptest-python-pytest vendor_cargo
 
 CARGO_SRC_DIR = "src/_bcrypt"
 
-require ${BPN}-crates.inc
-
 RDEPENDS:${PN}:class-target += "\
     python3-cffi \
     python3-ctypes \