From patchwork Tue Oct 14 22:44:38 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Sakoman X-Patchwork-Id: 72324 Received: from hexa.. 
([2602:feb4:3b:2100:ebea:520a:7699:bba7]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29034e20479sm174847365ad.47.2025.10.14.15.44.58 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Oct 2025 15:44:58 -0700 (PDT) From: Steve Sakoman To: openembedded-core@lists.openembedded.org Subject: [OE-core][kirkstone 01/14] qemu: patch CVE-2024-8354 Date: Tue, 14 Oct 2025 15:44:38 -0700 Message-ID: X-Mailer: git-send-email 2.43.0 In-Reply-To: References: MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Tue, 14 Oct 2025 22:45:07 -0000 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-core/message/224855 From: Peter Marko Pick commit per [1]. Signed-off-by: Peter Marko Signed-off-by: Steve Sakoman --- meta/recipes-devtools/qemu/qemu.inc | 1 + .../qemu/qemu/CVE-2024-8354.patch | 75 +++++++++++++++++++ 2 files changed, 76 insertions(+) create mode 100644 meta/recipes-devtools/qemu/qemu/CVE-2024-8354.patch diff --git a/meta/recipes-devtools/qemu/qemu.inc b/meta/recipes-devtools/qemu/qemu.inc index 1298514cd7..fd1a8647df 100644 --- a/meta/recipes-devtools/qemu/qemu.inc +++ b/meta/recipes-devtools/qemu/qemu.inc @@ -128,6 +128,7 @@ SRC_URI = "https://download.qemu.org/${BPN}-${PV}.tar.xz \ file://CVE-2024-3446-0005.patch \ file://CVE-2024-3446-0006.patch \ file://CVE-2024-3447.patch \ + file://CVE-2024-8354.patch \ " UPSTREAM_CHECK_REGEX = "qemu-(?P\d+(\.\d+)+)\.tar" diff --git a/meta/recipes-devtools/qemu/qemu/CVE-2024-8354.patch b/meta/recipes-devtools/qemu/qemu/CVE-2024-8354.patch new file mode 100644 index 0000000000..c469dcabb4 --- /dev/null +++ b/meta/recipes-devtools/qemu/qemu/CVE-2024-8354.patch @@ -0,0 +1,75 @@ +From 746269eaae16423572ae7c0dfeb66140fa882149 Mon Sep 17 00:00:00 2001 +From: Peter Maydell +Date: Mon, 15 Sep 2025 14:29:10 +0100 +Subject: [PATCH] hw/usb/hcd-uhci: don't assert for SETUP to non-0 endpoint + +If the guest feeds invalid data to the UHCI controller, we +can assert: +qemu-system-x86_64: ../../hw/usb/core.c:744: usb_ep_get: Assertion `pid == USB_TOKEN_IN || pid == USB_TOKEN_OUT' failed. + +(see issue 2548 for the repro case). This happens because the guest +attempts USB_TOKEN_SETUP to an endpoint other than 0, which is not +valid. The controller code doesn't catch this guest error, so +instead we hit the assertion in the USB core code. + +Catch the case of SETUP to non-zero endpoint, and treat it as a fatal +error in the TD, in the same way we do for an invalid PID value in +the TD. + +This is the UHCI equivalent of the same bug in OHCI that we fixed in +commit 3c3c233677 ("hw/usb/hcd-ohci: Fix #1510, #303: pid not IN or +OUT"). + +This bug has been tracked as CVE-2024-8354. 
+ +Cc: qemu-stable@nongnu.org +Fixes: https://gitlab.com/qemu-project/qemu/-/issues/2548 +Signed-off-by: Peter Maydell +Reviewed-by: Michael Tokarev +(cherry picked from commit d0af3cd0274e265435170a583c72b9f0a4100dff) +Signed-off-by: Michael Tokarev + +CVE: CVE-2024-8354 +Upstream-Status: Backport [https://gitlab.com/qemu-project/qemu/-/commit/746269eaae16423572ae7c0dfeb66140fa882149] +Signed-off-by: Peter Marko +--- + hw/usb/hcd-uhci.c | 10 ++++++++-- + 1 file changed, 8 insertions(+), 2 deletions(-) + +diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c +index 0561a6d801..8f4d6a0f71 100644 +--- a/hw/usb/hcd-uhci.c ++++ b/hw/usb/hcd-uhci.c +@@ -724,6 +724,7 @@ static int uhci_handle_td(UHCIState *s, UHCIQueue *q, uint32_t qh_addr, + bool spd; + bool queuing = (q != NULL); + uint8_t pid = td->token & 0xff; ++ uint8_t ep_id = (td->token >> 15) & 0xf; + UHCIAsync *async; + + async = uhci_async_find_td(s, td_addr); +@@ -767,9 +768,14 @@ static int uhci_handle_td(UHCIState *s, UHCIQueue *q, uint32_t qh_addr, + + switch (pid) { + case USB_TOKEN_OUT: +- case USB_TOKEN_SETUP: + case USB_TOKEN_IN: + break; ++ case USB_TOKEN_SETUP: ++ /* SETUP is only valid to endpoint 0 */ ++ if (ep_id == 0) { ++ break; ++ } ++ /* fallthrough */ + default: + /* invalid pid : frame interrupted */ + s->status |= UHCI_STS_HCPERR; +@@ -816,7 +822,7 @@ static int uhci_handle_td(UHCIState *s, UHCIQueue *q, uint32_t qh_addr, + return uhci_handle_td_error(s, td, td_addr, USB_RET_NODEV, + int_mask); + } +- ep = usb_ep_get(dev, pid, (td->token >> 15) & 0xf); ++ ep = usb_ep_get(dev, pid, ep_id); + q = uhci_queue_new(s, qh_addr, td, ep); + } + async = uhci_async_alloc(q, td_addr); From patchwork Tue Oct 14 22:44:39 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Sakoman X-Patchwork-Id: 72322 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id 915A5CCD195 for ; Tue, 14 Oct 2025 22:45:07 +0000 (UTC) Received: from mail-pg1-f177.google.com (mail-pg1-f177.google.com [209.85.215.177]) by mx.groups.io with SMTP id smtpd.web10.2505.1760481900812602546 for ; Tue, 14 Oct 2025 15:45:00 -0700 Authentication-Results: mx.groups.io; dkim=pass header.i=@sakoman-com.20230601.gappssmtp.com header.s=20230601 header.b=D1Pt8lhj; spf=softfail (domain: sakoman.com, ip: 209.85.215.177, mailfrom: steve@sakoman.com) Received: by mail-pg1-f177.google.com with SMTP id 41be03b00d2f7-b609a32a9b6so3467251a12.2 for ; Tue, 14 Oct 2025 15:45:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sakoman-com.20230601.gappssmtp.com; s=20230601; t=1760481900; x=1761086700; darn=lists.openembedded.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=venjp1TrO88LKkuVPAC55M6sN+lToPRnfzZ22/Z/IoI=; b=D1Pt8lhjLSRGnanoe4BjU2Rgorbgufg6I4bhl0hnCN5IVIYu7KfWwb52TCL5XYJ8Zs kKOJ6+JVfhSWQnX6/2oCsj/88u5nHRECx3Gc3eLYfnVg48t8uGSXYLiQ82SXn5tKnoGk wSaxtDOU5gCe/HurTVD2YIin3uqm7ZIGF5n4P1LbURfCT8eAtCfhafwmOIdUYT35gy/Q 9F4rrOcBnMLaP90dl8HNyKesgIVEASaHwLDdHV5txZlPISt9F8Ccc7ES6Et+dsk8sVk4 hpBKObZ1LaDPE8OUTjOBcChvi2NfVUIEAtt/IxFU5QJWKmUg6ct1TuNBXyzq/mjAGIxg WwXA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1760481900; x=1761086700; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=venjp1TrO88LKkuVPAC55M6sN+lToPRnfzZ22/Z/IoI=; b=SRkVlo07xzbG9GRrik8AjpyN3k7a5utYlCDE9KbVHcC2mhwIFiklUJZIRSNyeqTd6P GxuSzFX1fjcFxO/A0iMPBUUfRe2c4ccYD63IySoT8yFeDRhbRzrF63eKJL/i9fg8vwG1 z377QZwE192F9fh25AQwma9l+Fj3SoC67n4Chlq8lBkfGbYSK5doay3e6EmzZ5THV3Yd LSZ9tn6Wq+Cj5D2WhYfPmlzs/CFOU342NSYk4JrXGwAo/x2ga3VH+KiiCw8dBu//wV15 YAsWJ0caYugt/qo3Rv0dWGzxdcEyjIx3npkjgYxYQOR85nbNewBDkDjAIV9RKT4BciAu J9Lg== X-Gm-Message-State: AOJu0YxRHgz1vEuzL9jtn8RTiqfKiiJKEE/4Yg7wtJWhIdZ8laQrGzf7 Blk5sWlwcevt5urx+0hewA/ywGLop7tp2p01uCXIeEyxPqsMVAYVddDG8hqKADtr5rwtJBy4PYo mIj5i X-Gm-Gg: ASbGncuewH3kbuZNJacyDno0HtX1DYpi0ZPOZkGTe+ZU9XaEUujPb9Z/KTA5OSfuwV6 2ZsMoGDLH72g3Za0uN61W5C6rPm2OQ1wWwPIENQcOcAdMSZkr/JUMRSz38WfPxQpVG35W4NkjCT N88MK1kXlIFA11RRaKr9ZlhsYC03t8ENJm7vb0wVtmWXyruar7yGsKlA9ROjS2AUdPp8Xq0tYMA R2tdnOAXh4cekJ5GAN+X7koUVOlISaBWvGEgJhPeWtLCWkgJy5vkOb2M8xev516pxIw1zXtWoqW XwMfgXIpxlVhG4fQwem4i+g/ut5Cff0zwJP/JcVHtoW1eqG8cgC10FC61LfRjWEFhUeHvAo+YYW fh2UnOXU2jls3YVHx0+bnD+r6V/X7pEQ0 X-Google-Smtp-Source: AGHT+IHvjYanZ/RfaIdcKjrhxcj7a7m7gdoKKWkTSocQ6/DIOWtXXXZ/BnmLHGp62xYIkhbvVkikzg== X-Received: by 2002:a17:903:4b2f:b0:24c:cc32:788b with SMTP id d9443c01a7336-29027216103mr354288685ad.3.1760481900059; Tue, 14 Oct 2025 15:45:00 -0700 (PDT) Received: from hexa.. ([2602:feb4:3b:2100:ebea:520a:7699:bba7]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29034e20479sm174847365ad.47.2025.10.14.15.44.59 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Oct 2025 15:44:59 -0700 (PDT) From: Steve Sakoman To: openembedded-core@lists.openembedded.org Subject: [OE-core][kirkstone 02/14] binutils: patch CVE-2025-11082 Date: Tue, 14 Oct 2025 15:44:39 -0700 Message-ID: X-Mailer: git-send-email 2.43.0 In-Reply-To: References: MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Tue, 14 Oct 2025 22:45:07 -0000 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-core/message/224856 From: Peter Marko Pick patch per link in NVD report. Signed-off-by: Peter Marko Signed-off-by: Mathieu Dubois-Briand Signed-off-by: Steve Sakoman --- .../binutils/binutils-2.38.inc | 1 + .../binutils/0044-CVE-2025-11082.patch | 46 +++++++++++++++++++ 2 files changed, 47 insertions(+) create mode 100644 meta/recipes-devtools/binutils/binutils/0044-CVE-2025-11082.patch diff --git a/meta/recipes-devtools/binutils/binutils-2.38.inc b/meta/recipes-devtools/binutils/binutils-2.38.inc index 527334ccec..0fd950e694 100644 --- a/meta/recipes-devtools/binutils/binutils-2.38.inc +++ b/meta/recipes-devtools/binutils/binutils-2.38.inc @@ -80,5 +80,6 @@ SRC_URI = "\ file://0042-CVE-2025-5245.patch \ file://0043-CVE-2025-7546.patch \ file://0043-CVE-2025-7545.patch \ + file://0044-CVE-2025-11082.patch \ " S = "${WORKDIR}/git" diff --git a/meta/recipes-devtools/binutils/binutils/0044-CVE-2025-11082.patch b/meta/recipes-devtools/binutils/binutils/0044-CVE-2025-11082.patch new file mode 100644 index 0000000000..83747d4e8b --- /dev/null +++ b/meta/recipes-devtools/binutils/binutils/0044-CVE-2025-11082.patch @@ -0,0 +1,46 @@ +From ea1a0737c7692737a644af0486b71e4a392cbca8 Mon Sep 17 00:00:00 2001 +From: "H.J. 
Lu" +Date: Mon, 22 Sep 2025 15:20:34 +0800 +Subject: [PATCH] elf: Don't read beyond .eh_frame section size + + PR ld/33464 + * elf-eh-frame.c (_bfd_elf_parse_eh_frame): Don't read beyond + .eh_frame section size. + +Signed-off-by: H.J. Lu + +CVE: CVE-2025-11082 +Upstream-Status: Backport [https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=ea1a0737c7692737a644af0486b71e4a392cbca8] +Signed-off-by: Peter Marko +--- + bfd/elf-eh-frame.c | 8 ++++++-- + 1 file changed, 6 insertions(+), 2 deletions(-) + +diff --git a/bfd/elf-eh-frame.c b/bfd/elf-eh-frame.c +index dc0d2e097f5..30bb313489c 100644 +--- a/bfd/elf-eh-frame.c ++++ b/bfd/elf-eh-frame.c +@@ -733,6 +733,7 @@ _bfd_elf_parse_eh_frame (bfd *abfd, struct bfd_link_info *info, + if (hdr_id == 0) + { + unsigned int initial_insn_length; ++ char *null_byte; + + /* CIE */ + this_inf->cie = 1; +@@ -749,10 +750,13 @@ _bfd_elf_parse_eh_frame (bfd *abfd, struct bfd_link_info *info, + REQUIRE (cie->version == 1 + || cie->version == 3 + || cie->version == 4); +- REQUIRE (strlen ((char *) buf) < sizeof (cie->augmentation)); ++ null_byte = memchr ((char *) buf, 0, end - buf); ++ REQUIRE (null_byte != NULL); ++ REQUIRE ((size_t) (null_byte - (char *) buf) ++ < sizeof (cie->augmentation)); + + strcpy (cie->augmentation, (char *) buf); +- buf = (bfd_byte *) strchr ((char *) buf, '\0') + 1; ++ buf = (bfd_byte *) null_byte + 1; + this_inf->u.cie.aug_str_len = buf - start - 1; + ENSURE_NO_RELOCS (buf); + if (buf[0] == 'e' && buf[1] == 'h') From patchwork Tue Oct 14 22:44:40 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Sakoman X-Patchwork-Id: 72323 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9EE63CCD184 for ; Tue, 14 Oct 2025 22:45:07 +0000 (UTC) Received: from mail-pl1-f180.google.com (mail-pl1-f180.google.com [209.85.214.180]) by mx.groups.io with SMTP id smtpd.web11.2570.1760481902255393641 for ; Tue, 14 Oct 2025 15:45:02 -0700 Authentication-Results: mx.groups.io; dkim=pass header.i=@sakoman-com.20230601.gappssmtp.com header.s=20230601 header.b=oyMPuMbP; spf=softfail (domain: sakoman.com, ip: 209.85.214.180, mailfrom: steve@sakoman.com) Received: by mail-pl1-f180.google.com with SMTP id d9443c01a7336-26c209802c0so58265655ad.0 for ; Tue, 14 Oct 2025 15:45:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sakoman-com.20230601.gappssmtp.com; s=20230601; t=1760481901; x=1761086701; darn=lists.openembedded.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=i94l+Z0mdlVS95ZG+k9X9IIZwi05KN1TW+zwu8b9H/w=; b=oyMPuMbP/a45eRU7BVJcsWCJi+0AuKhsBykM5srfedzRHVkR+WfOfQIxkb4qxf/9Kn O0BTEf/o2WXLeavVAh60vf0fklu3djTAxWP2ACOJKiwEFPy/wu9r7C8TmqH1VbdXhw7d qRwOUfO3VjuXJTg0bQvMOej7PoewHFyxJmDPBSN0ScGD8vLXy2pg4C4YQ7JibvrWBm7h 2rBtkBReb3tHQtpk1plmkwG1SLmv8GqfE2zKDpfaXzxNUewXWttAdCvv7WXyDSBZ/TAc EUyycURdT9GHjUVYHaFhHNHwzg3QIMzkSId+hQfVGc3SdnEtbrkIybm4SBnWPQ4PGuls bZaQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1760481901; x=1761086701; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
bh=i94l+Z0mdlVS95ZG+k9X9IIZwi05KN1TW+zwu8b9H/w=; b=XeImU+hL0oQNFq/bNFg3m5DUVTzqZsBVaeuKHsWqrVQLt1XvyNUXojdZfgQKujb7br wtc+of2gG5x4HkjUrElLCIwhOa8IS3oDBwYRlTSCojS6jJLyrNspg2z9nht25lNNtCiK KcDn25lWWPQ7ZwrXf7wx6KEZPw5tkzlD55YrQEipTYE/l472asdJoJGogM8M0dOQi7Oh t/pidsAGYS/oLIMU4duL6Ge+FgvY4BiNIfWLCfQgULRxlNEwyW2ZHTrF56bQ99Rt70kC 9qJlyWMjQfcPqfaxsa0iW+AScI+AaDpOZr7Udb+9IVqHxWKAtp6x6qC6rE4aulLyBl4r lr/A== X-Gm-Message-State: AOJu0Yz3uhiVFn3to5CXyMPEUBzLq4yFZJEO+9F5n1kYpmNB9nMEipqF HVdTS+rL/TxQf3wf4lwHp3lpyclAEzsEaqS/WvN3vb6+4YVvS3UE6k03XrpCOHPYMEWUToCZsuW kbloc X-Gm-Gg: ASbGncs9b/ZEg0YSGNBVyTlsM0X5pqGo9k7B/KkKEAb66ws2SJmroeorDHqDpMDESdy Zsr+nuqHnvyu1gsIUZPne1SPGq4lhbXKdemwPw5QLmnaDICFjKeeHL03W7qiao3l+K3nPwPF6vD xhKQlFELU74Shhzzk7JroQ+snTg4H3I/wxT93mn59xeyl0Djzex2YTAhT0Mkfb2qAKRDcrXeWCn U5H2Kg8dQR5D+2SUYy/OOKswNzm/Ak/ewz5MNlJviKRRKkpyu2WWLuJEc4L4SamP41rmKyOu7Tl BedIDFa0U9iOaZUHEqPKd69FL47740LHRCyOu+5F2g9hPEEhWMO5HNHMoZ1Y6zgO66S4WMP2ZcL h1BTdG+rYkErJEE2uEvWwym8LPOLlMePLWmy9fnntoDA= X-Google-Smtp-Source: AGHT+IH+ehg5qDPIwthHOWf0ypvz5Q7J1IZOwALYXlPW57LrIoz48cdpQ2/FbmovRCuZ8NsqgHgm/Q== X-Received: by 2002:a17:902:f641:b0:27e:edd9:576e with SMTP id d9443c01a7336-290273ef199mr302361045ad.30.1760481901462; Tue, 14 Oct 2025 15:45:01 -0700 (PDT) Received: from hexa.. ([2602:feb4:3b:2100:ebea:520a:7699:bba7]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29034e20479sm174847365ad.47.2025.10.14.15.45.00 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Oct 2025 15:45:01 -0700 (PDT) From: Steve Sakoman To: openembedded-core@lists.openembedded.org Subject: [OE-core][kirkstone 03/14] binutils: patch CVE-2025-11083 Date: Tue, 14 Oct 2025 15:44:40 -0700 Message-ID: <99879f41af7272e597c9a8c4c0260d1b690f9051.1760481775.git.steve@sakoman.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: References: MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Tue, 14 Oct 2025 22:45:07 -0000 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-core/message/224857 From: Peter Marko Pick patch per link in NVD report. Signed-off-by: Peter Marko Signed-off-by: Mathieu Dubois-Briand Signed-off-by: Steve Sakoman --- .../binutils/binutils-2.38.inc | 1 + .../binutils/0045-CVE-2025-11083.patch | 77 +++++++++++++++++++ 2 files changed, 78 insertions(+) create mode 100644 meta/recipes-devtools/binutils/binutils/0045-CVE-2025-11083.patch diff --git a/meta/recipes-devtools/binutils/binutils-2.38.inc b/meta/recipes-devtools/binutils/binutils-2.38.inc index 0fd950e694..2e978edc6f 100644 --- a/meta/recipes-devtools/binutils/binutils-2.38.inc +++ b/meta/recipes-devtools/binutils/binutils-2.38.inc @@ -81,5 +81,6 @@ SRC_URI = "\ file://0043-CVE-2025-7546.patch \ file://0043-CVE-2025-7545.patch \ file://0044-CVE-2025-11082.patch \ + file://0045-CVE-2025-11083.patch \ " S = "${WORKDIR}/git" diff --git a/meta/recipes-devtools/binutils/binutils/0045-CVE-2025-11083.patch b/meta/recipes-devtools/binutils/binutils/0045-CVE-2025-11083.patch new file mode 100644 index 0000000000..d303f651b8 --- /dev/null +++ b/meta/recipes-devtools/binutils/binutils/0045-CVE-2025-11083.patch @@ -0,0 +1,77 @@ +From 9ca499644a21ceb3f946d1c179c38a83be084490 Mon Sep 17 00:00:00 2001 +From: "H.J. Lu" +Date: Thu, 18 Sep 2025 16:59:25 -0700 +Subject: [PATCH] elf: Don't match corrupt section header in linker input + +Don't swap in nor match corrupt section header in linker input to avoid +linker crash later. 
+ + PR ld/33457 + * elfcode.h (elf_swap_shdr_in): Changed to return bool. Return + false for corrupt section header in linker input. + (elf_object_p): Reject if elf_swap_shdr_in returns false. + +Signed-off-by: H.J. Lu + +CVE: CVE-2025-11083 +Upstream-Status: Backport [https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=9ca499644a21ceb3f946d1c179c38a83be084490] +Signed-off-by: Peter Marko +--- + bfd/elfcode.h | 14 +++++++++----- + 1 file changed, 9 insertions(+), 5 deletions(-) + +diff --git a/bfd/elfcode.h b/bfd/elfcode.h +index 9c65852e103..5224a1abee6 100644 +--- a/bfd/elfcode.h ++++ b/bfd/elfcode.h +@@ -298,7 +298,7 @@ elf_swap_ehdr_out (bfd *abfd, + /* Translate an ELF section header table entry in external format into an + ELF section header table entry in internal format. */ + +-static void ++static bool + elf_swap_shdr_in (bfd *abfd, + const Elf_External_Shdr *src, + Elf_Internal_Shdr *dst) +@@ -328,6 +328,9 @@ elf_swap_shdr_in (bfd *abfd, + if (!abfd->read_only) + _bfd_error_handler (_("warning: %pB has a section " + "extending past end of file"), abfd); ++ /* PR ld/33457: Don't match corrupt section header. */ ++ if (abfd->is_linker_input) ++ return false; + abfd->read_only = 1; + } + } +@@ -337,6 +340,7 @@ elf_swap_shdr_in (bfd *abfd, + dst->sh_entsize = H_GET_WORD (abfd, src->sh_entsize); + dst->bfd_section = NULL; + dst->contents = NULL; ++ return true; + } + + /* Translate an ELF section header table entry in internal format into an +@@ -629,9 +633,9 @@ elf_object_p (bfd *abfd) + + /* Read the first section header at index 0, and convert to internal + form. */ +- if (bfd_bread (&x_shdr, sizeof x_shdr, abfd) != sizeof (x_shdr)) ++ if (bfd_bread (&x_shdr, sizeof x_shdr, abfd) != sizeof (x_shdr) ++ || !elf_swap_shdr_in (abfd, &x_shdr, &i_shdr)) + goto got_no_match; +- elf_swap_shdr_in (abfd, &x_shdr, &i_shdr); + + /* If the section count is zero, the actual count is in the first + section header. */ +@@ -717,9 +721,9 @@ elf_object_p (bfd *abfd) + to internal form. */ + for (shindex = 1; shindex < i_ehdrp->e_shnum; shindex++) + { +- if (bfd_bread (&x_shdr, sizeof x_shdr, abfd) != sizeof (x_shdr)) ++ if (bfd_bread (&x_shdr, sizeof x_shdr, abfd) != sizeof (x_shdr) ++ || !elf_swap_shdr_in (abfd, &x_shdr, i_shdrp + shindex)) + goto got_no_match; +- elf_swap_shdr_in (abfd, &x_shdr, i_shdrp + shindex); + + /* Sanity check sh_link and sh_info. 
*/ + if (i_shdrp[shindex].sh_link >= num_sec) From patchwork Tue Oct 14 22:44:41 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Sakoman X-Patchwork-Id: 72325 Received: from hexa.. 
([2602:feb4:3b:2100:ebea:520a:7699:bba7]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29034e20479sm174847365ad.47.2025.10.14.15.45.02 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Oct 2025 15:45:03 -0700 (PDT) From: Steve Sakoman To: openembedded-core@lists.openembedded.org Subject: [OE-core][kirkstone 04/14] glibc: Remove partial BZ#25847 backport patches Date: Tue, 14 Oct 2025 15:44:41 -0700 Message-ID: <9881dd70305b87945e9649d744bcbc40a1a7b780.1760481775.git.steve@sakoman.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: References: MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Tue, 14 Oct 2025 22:45:07 -0000 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-core/message/224858 From: Sunil Dora To facilitate a clean backport of the full 10-commit series addressing the pthread condition variable lost wakeup issue (BZ#25847) in glibc 2.35, remove the existing 8 patches that were applied as a partial backport. The previous partial backport excluded commit: c36fc50781995e6758cae2b6927839d0157f213c ("nptl: Remove g_refs from condition variables") based on guidance from glibc maintainer Florian Weimer(#comment #74) This exclusion was recommended for stable branches to avoid altering the layout of pthread_cond_t, which could introduce ABI incompatibilities. Additionally, the dependent commit dbc5a50d12eff4cb3f782129029d04b8a76f58e7 was not needed in the partial backport. To align with upstream mainline, per maintainer Carlos O'Donell (comment #75), apply the complete 10-commit series for consistency. By removing these patches first, we ensure the subsequent application of the full 10 commits results in cleaner, more reviewable changes without intermixed conflicts or overlaps. Removed patches and corresponding upstream commits: - 0026-PR25847-1.patch: 1db84775f831a1494993ce9c118deaf9537cc50a - 0026-PR25847-2.patch: 0cc973160c23bb67f895bc887dd6942d29f8fee3 - 0026-PR25847-3.patch: b42cc6af11062c260c7dfa91f1c89891366fed3e - 0026-PR25847-4.patch: 4f7b051f8ee3feff1b53b27a906f245afaa9cee1 - 0026-PR25847-5.patch: 929a4764ac90382616b6a21f099192b2475da674 - 0026-PR25847-6.patch: ee6c14ed59d480720721aaacc5fb03213dc153da - 0026-PR25847-7.patch: 4b79e27a5073c02f6bff9aa8f4791230a0ab1867 - 0026-PR25847-8.patch: 91bb902f58264a2fd50fbce8f39a9a290dd23706 Bug reference: https://sourceware.org/bugzilla/show_bug.cgi?id=25847 This change prepares the branch for the full backport in follow-up commits. 
Signed-off-by: Sunil Dora Signed-off-by: Steve Sakoman --- .../glibc/glibc/0026-PR25847-1.patch | 455 ------------------ .../glibc/glibc/0026-PR25847-2.patch | 144 ------ .../glibc/glibc/0026-PR25847-3.patch | 77 --- .../glibc/glibc/0026-PR25847-4.patch | 117 ----- .../glibc/glibc/0026-PR25847-5.patch | 105 ---- .../glibc/glibc/0026-PR25847-6.patch | 169 ------- .../glibc/glibc/0026-PR25847-7.patch | 160 ------ .../glibc/glibc/0026-PR25847-8.patch | 192 -------- meta/recipes-core/glibc/glibc_2.35.bb | 8 - 9 files changed, 1427 deletions(-) delete mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-1.patch delete mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-2.patch delete mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-3.patch delete mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-4.patch delete mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-5.patch delete mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-6.patch delete mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-7.patch delete mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-8.patch diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-1.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-1.patch deleted file mode 100644 index 44a2b6772c..0000000000 --- a/meta/recipes-core/glibc/glibc/0026-PR25847-1.patch +++ /dev/null @@ -1,455 +0,0 @@ -From 31d9848830e496f57d4182b518467c4c63bfd4bd Mon Sep 17 00:00:00 2001 -From: Frank Barrus -Date: Mon, 16 Jun 2025 22:37:54 -0700 -Subject: [PATCH] pthreads NPTL: lost wakeup fix 2 - -This fixes the lost wakeup (from a bug in signal stealing) with a change -in the usage of g_signals[] in the condition variable internal state. -It also completely eliminates the concept and handling of signal stealing, -as well as the need for signalers to block to wait for waiters to wake -up every time there is a G1/G2 switch. This greatly reduces the average -and maximum latency for pthread_cond_signal. - -The g_signals[] field now contains a signal count that is relative to -the current g1_start value. Since it is a 32-bit field, and the LSB is -still reserved (though not currently used anymore), it has a 31-bit value -that corresponds to the low 31 bits of the sequence number in g1_start. -(since g1_start also has an LSB flag, this means bits 31:1 in g_signals -correspond to bits 31:1 in g1_start, plus the current signal count) - -By making the signal count relative to g1_start, there is no longer -any ambiguity or A/B/A issue, and thus any checks before blocking, -including the futex call itself, are guaranteed not to block if the G1/G2 -switch occurs, even if the signal count remains the same. This allows -initially safely blocking in G2 until the switch to G1 occurs, and -then transitioning from G1 to a new G1 or G2, and always being able to -distinguish the state change. This removes the race condition and A/B/A -problems that otherwise ocurred if a late (pre-empted) waiter were to -resume just as the futex call attempted to block on g_signal since -otherwise there was no last opportunity to re-check things like whether -the current G1 group was already closed. - -By fixing these issues, the signal stealing code can be eliminated, -since there is no concept of signal stealing anymore. The code to block -for all waiters to exit g_refs can also be removed, since any waiters -that are still in the g_refs region can be guaranteed to safely wake -up and exit. 
If there are still any left at this time, they are all -sent one final futex wakeup to ensure that they are not blocked any -longer, but there is no need for the signaller to block and wait for -them to wake up and exit the g_refs region. - -The signal count is then effectively "zeroed" but since it is now -relative to g1_start, this is done by advancing it to a new value that -can be observed by any pending blocking waiters. Any late waiters can -always tell the difference, and can thus just cleanly exit if they are -in a stale G1 or G2. They can never steal a signal from the current -G1 if they are not in the current G1, since the signal value that has -to match in the cmpxchg has the low 31 bits of the g1_start value -contained in it, and that's first checked, and then it won't match if -there's a G1/G2 change. - -Note: the 31-bit sequence number used in g_signals is designed to -handle wrap-around when checking the signal count, but if the entire -31-bit wraparound (2 billion signals) occurs while there is still a -late waiter that has not yet resumed, and it happens to then match -the current g1_start low bits, and the pre-emption occurs after the -normal "closed group" checks (which are 64-bit) but then hits the -futex syscall and signal consuming code, then an A/B/A issue could -still result and cause an incorrect assumption about whether it -should block. This particular scenario seems unlikely in practice. -Note that once awake from the futex, the waiter would notice the -closed group before consuming the signal (since that's still a 64-bit -check that would not be aliased in the wrap-around in g_signals), -so the biggest impact would be blocking on the futex until the next -full wakeup from a G1/G2 switch. - -The following commits have been cherry-picked from Glibc master branch: -Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 - -Upstream-Status: Backport -[https://sourceware.org/git/?p=glibc.git;a=commit;h=1db84775f831a1494993ce9c118deaf9537cc50a] - -Signed-off-by: Sunil Dora ---- - nptl/pthread_cond_common.c | 106 +++++++++------------------ - nptl/pthread_cond_wait.c | 144 ++++++++++++------------------------- - 2 files changed, 81 insertions(+), 169 deletions(-) - -diff --git a/nptl/pthread_cond_common.c b/nptl/pthread_cond_common.c -index fb035f72c3..8dd7037923 100644 ---- a/nptl/pthread_cond_common.c -+++ b/nptl/pthread_cond_common.c -@@ -201,7 +201,6 @@ static bool __attribute__ ((unused)) - __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, - unsigned int *g1index, int private) - { -- const unsigned int maxspin = 0; - unsigned int g1 = *g1index; - - /* If there is no waiter in G2, we don't do anything. The expression may -@@ -222,85 +221,46 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, - * New waiters arriving concurrently with the group switching will all go - into G2 until we atomically make the switch. Waiters existing in G2 - are not affected. -- * Waiters in G1 will be closed out immediately by setting a flag in -- __g_signals, which will prevent waiters from blocking using a futex on -- __g_signals and also notifies them that the group is closed. As a -- result, they will eventually remove their group reference, allowing us -- to close switch group roles. */ -- -- /* First, set the closed flag on __g_signals. This tells waiters that are -- about to wait that they shouldn't do that anymore. 
This basically -- serves as an advance notificaton of the upcoming change to __g1_start; -- waiters interpret it as if __g1_start was larger than their waiter -- sequence position. This allows us to change __g1_start after waiting -- for all existing waiters with group references to leave, which in turn -- makes recovery after stealing a signal simpler because it then can be -- skipped if __g1_start indicates that the group is closed (otherwise, -- we would have to recover always because waiters don't know how big their -- groups are). Relaxed MO is fine. */ -- atomic_fetch_or_relaxed (cond->__data.__g_signals + g1, 1); -- -- /* Wait until there are no group references anymore. The fetch-or operation -- injects us into the modification order of __g_refs; release MO ensures -- that waiters incrementing __g_refs after our fetch-or see the previous -- changes to __g_signals and to __g1_start that had to happen before we can -- switch this G1 and alias with an older group (we have two groups, so -- aliasing requires switching group roles twice). Note that nobody else -- can have set the wake-request flag, so we do not have to act upon it. -- -- Also note that it is harmless if older waiters or waiters from this G1 -- get a group reference after we have quiesced the group because it will -- remain closed for them either because of the closed flag in __g_signals -- or the later update to __g1_start. New waiters will never arrive here -- but instead continue to go into the still current G2. */ -- unsigned r = atomic_fetch_or_release (cond->__data.__g_refs + g1, 0); -- while ((r >> 1) > 0) -- { -- for (unsigned int spin = maxspin; ((r >> 1) > 0) && (spin > 0); spin--) -- { -- /* TODO Back off. */ -- r = atomic_load_relaxed (cond->__data.__g_refs + g1); -- } -- if ((r >> 1) > 0) -- { -- /* There is still a waiter after spinning. Set the wake-request -- flag and block. Relaxed MO is fine because this is just about -- this futex word. -- -- Update r to include the set wake-request flag so that the upcoming -- futex_wait only blocks if the flag is still set (otherwise, we'd -- violate the basic client-side futex protocol). */ -- r = atomic_fetch_or_relaxed (cond->__data.__g_refs + g1, 1) | 1; -- -- if ((r >> 1) > 0) -- futex_wait_simple (cond->__data.__g_refs + g1, r, private); -- /* Reload here so we eventually see the most recent value even if we -- do not spin. */ -- r = atomic_load_relaxed (cond->__data.__g_refs + g1); -- } -- } -- /* Acquire MO so that we synchronize with the release operation that waiters -- use to decrement __g_refs and thus happen after the waiters we waited -- for. */ -- atomic_thread_fence_acquire (); -+ * Waiters in G1 will be closed out immediately by the advancing of -+ __g_signals to the next "lowseq" (low 31 bits of the new g1_start), -+ which will prevent waiters from blocking using a futex on -+ __g_signals since it provides enough signals for all possible -+ remaining waiters. As a result, they can each consume a signal -+ and they will eventually remove their group reference. */ - - /* Update __g1_start, which finishes closing this group. The value we add - will never be negative because old_orig_size can only be zero when we - switch groups the first time after a condvar was initialized, in which -- case G1 will be at index 1 and we will add a value of 1. See above for -- why this takes place after waiting for quiescence of the group. -+ case G1 will be at index 1 and we will add a value of 1. 
- Relaxed MO is fine because the change comes with no additional - constraints that others would have to observe. */ - __condvar_add_g1_start_relaxed (cond, - (old_orig_size << 1) + (g1 == 1 ? 1 : - 1)); - -- /* Now reopen the group, thus enabling waiters to again block using the -- futex controlled by __g_signals. Release MO so that observers that see -- no signals (and thus can block) also see the write __g1_start and thus -- that this is now a new group (see __pthread_cond_wait_common for the -- matching acquire MO loads). */ -- atomic_store_release (cond->__data.__g_signals + g1, 0); -- -+ unsigned int lowseq = ((old_g1_start + old_orig_size) << 1) & ~1U; -+ -+ /* If any waiters still hold group references (and thus could be blocked), -+ then wake them all up now and prevent any running ones from blocking. -+ This is effectively a catch-all for any possible current or future -+ bugs that can allow the group size to reach 0 before all G1 waiters -+ have been awakened or at least given signals to consume, or any -+ other case that can leave blocked (or about to block) older waiters.. */ -+ if ((atomic_fetch_or_release (cond->__data.__g_refs + g1, 0) >> 1) > 0) -+ { -+ /* First advance signals to the end of the group (i.e. enough signals -+ for the entire G1 group) to ensure that waiters which have not -+ yet blocked in the futex will not block. -+ Note that in the vast majority of cases, this should never -+ actually be necessary, since __g_signals will have enough -+ signals for the remaining g_refs waiters. As an optimization, -+ we could check this first before proceeding, although that -+ could still leave the potential for futex lost wakeup bugs -+ if the signal count was non-zero but the futex wakeup -+ was somehow lost. */ -+ atomic_store_release (cond->__data.__g_signals + g1, lowseq); -+ -+ futex_wake (cond->__data.__g_signals + g1, INT_MAX, private); -+ } - /* At this point, the old G1 is now a valid new G2 (but not in use yet). - No old waiter can neither grab a signal nor acquire a reference without - noticing that __g1_start is larger. -@@ -311,6 +271,10 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, - g1 ^= 1; - *g1index ^= 1; - -+ /* Now advance the new G1 g_signals to the new lowseq, giving it -+ an effective signal count of 0 to start. */ -+ atomic_store_release (cond->__data.__g_signals + g1, lowseq); -+ - /* These values are just observed by signalers, and thus protected by the - lock. */ - unsigned int orig_size = wseq - (old_g1_start + old_orig_size); -diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c -index 20c348a503..1cb3dbf7b0 100644 ---- a/nptl/pthread_cond_wait.c -+++ b/nptl/pthread_cond_wait.c -@@ -238,9 +238,7 @@ __condvar_cleanup_waiting (void *arg) - signaled), and a reference count. - - The group reference count is used to maintain the number of waiters that -- are using the group's futex. Before a group can change its role, the -- reference count must show that no waiters are using the futex anymore; this -- prevents ABA issues on the futex word. -+ are using the group's futex. - - To represent which intervals in the waiter sequence the groups cover (and - thus also which group slot contains G1 or G2), we use a 64b counter to -@@ -300,11 +298,12 @@ __condvar_cleanup_waiting (void *arg) - last reference. - * Reference count used by waiters concurrently with signalers that have - acquired the condvar-internal lock. -- __g_signals: The number of signals that can still be consumed. 
-+ __g_signals: The number of signals that can still be consumed, relative to -+ the current g1_start. (i.e. bits 31 to 1 of __g_signals are bits -+ 31 to 1 of g1_start with the signal count added) - * Used as a futex word by waiters. Used concurrently by waiters and - signalers. -- * LSB is true iff this group has been completely signaled (i.e., it is -- closed). -+ * LSB is currently reserved and 0. - __g_size: Waiters remaining in this group (i.e., which have not been - signaled yet. - * Accessed by signalers and waiters that cancel waiting (both do so only -@@ -328,18 +327,6 @@ __condvar_cleanup_waiting (void *arg) - sufficient because if a waiter can see a sufficiently large value, it could - have also consume a signal in the waiters group. - -- Waiters try to grab a signal from __g_signals without holding a reference -- count, which can lead to stealing a signal from a more recent group after -- their own group was already closed. They cannot always detect whether they -- in fact did because they do not know when they stole, but they can -- conservatively add a signal back to the group they stole from; if they -- did so unnecessarily, all that happens is a spurious wake-up. To make this -- even less likely, __g1_start contains the index of the current g2 too, -- which allows waiters to check if there aliasing on the group slots; if -- there wasn't, they didn't steal from the current G1, which means that the -- G1 they stole from must have been already closed and they do not need to -- fix anything. -- - It is essential that the last field in pthread_cond_t is __g_signals[1]: - The previous condvar used a pointer-sized field in pthread_cond_t, so a - PTHREAD_COND_INITIALIZER from that condvar implementation might only -@@ -435,6 +422,9 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - { - while (1) - { -+ uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); -+ unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; -+ - /* Spin-wait first. - Note that spinning first without checking whether a timeout - passed might lead to what looks like a spurious wake-up even -@@ -446,35 +436,45 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - having to compare against the current time seems to be the right - choice from a performance perspective for most use cases. */ - unsigned int spin = maxspin; -- while (signals == 0 && spin > 0) -+ while (spin > 0 && ((int)(signals - lowseq) < 2)) - { - /* Check that we are not spinning on a group that's already - closed. */ -- if (seq < (__condvar_load_g1_start_relaxed (cond) >> 1)) -- goto done; -+ if (seq < (g1_start >> 1)) -+ break; - - /* TODO Back off. */ - - /* Reload signals. See above for MO. */ - signals = atomic_load_acquire (cond->__data.__g_signals + g); -+ g1_start = __condvar_load_g1_start_relaxed (cond); -+ lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; - spin--; - } - -- /* If our group will be closed as indicated by the flag on signals, -- don't bother grabbing a signal. */ -- if (signals & 1) -- goto done; -- -- /* If there is an available signal, don't block. */ -- if (signals != 0) -+ if (seq < (g1_start >> 1)) -+ { -+ /* If the group is closed already, -+ then this waiter originally had enough extra signals to -+ consume, up until the time its group was closed. */ -+ goto done; -+ } -+ -+ /* If there is an available signal, don't block. 
-+ If __g1_start has advanced at all, then we must be in G1 -+ by now, perhaps in the process of switching back to an older -+ G2, but in either case we're allowed to consume the available -+ signal and should not block anymore. */ -+ if ((int)(signals - lowseq) >= 2) - break; - - /* No signals available after spinning, so prepare to block. - We first acquire a group reference and use acquire MO for that so - that we synchronize with the dummy read-modify-write in - __condvar_quiesce_and_switch_g1 if we read from that. In turn, -- in this case this will make us see the closed flag on __g_signals -- that designates a concurrent attempt to reuse the group's slot. -+ in this case this will make us see the advancement of __g_signals -+ to the upcoming new g1_start that occurs with a concurrent -+ attempt to reuse the group's slot. - We use acquire MO for the __g_signals check to make the - __g1_start check work (see spinning above). - Note that the group reference acquisition will not mask the -@@ -482,15 +482,24 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - an atomic read-modify-write operation and thus extend the release - sequence. */ - atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2); -- if (((atomic_load_acquire (cond->__data.__g_signals + g) & 1) != 0) -- || (seq < (__condvar_load_g1_start_relaxed (cond) >> 1))) -+ signals = atomic_load_acquire (cond->__data.__g_signals + g); -+ g1_start = __condvar_load_g1_start_relaxed (cond); -+ lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; -+ -+ if (seq < (g1_start >> 1)) - { -- /* Our group is closed. Wake up any signalers that might be -- waiting. */ -+ /* group is closed already, so don't block */ - __condvar_dec_grefs (cond, g, private); - goto done; - } - -+ if ((int)(signals - lowseq) >= 2) -+ { -+ /* a signal showed up or G1/G2 switched after we grabbed the refcount */ -+ __condvar_dec_grefs (cond, g, private); -+ break; -+ } -+ - // Now block. - struct _pthread_cleanup_buffer buffer; - struct _condvar_cleanup_buffer cbuffer; -@@ -501,7 +510,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - __pthread_cleanup_push (&buffer, __condvar_cleanup_waiting, &cbuffer); - - err = __futex_abstimed_wait_cancelable64 ( -- cond->__data.__g_signals + g, 0, clockid, abstime, private); -+ cond->__data.__g_signals + g, signals, clockid, abstime, private); - - __pthread_cleanup_pop (&buffer, 0); - -@@ -524,6 +533,8 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - signals = atomic_load_acquire (cond->__data.__g_signals + g); - } - -+ if (seq < (__condvar_load_g1_start_relaxed (cond) >> 1)) -+ goto done; - } - /* Try to grab a signal. Use acquire MO so that we see an up-to-date value - of __g1_start below (see spinning above for a similar case). In -@@ -532,69 +543,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - while (!atomic_compare_exchange_weak_acquire (cond->__data.__g_signals + g, - &signals, signals - 2)); - -- /* We consumed a signal but we could have consumed from a more recent group -- that aliased with ours due to being in the same group slot. If this -- might be the case our group must be closed as visible through -- __g1_start. */ -- uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); -- if (seq < (g1_start >> 1)) -- { -- /* We potentially stole a signal from a more recent group but we do not -- know which group we really consumed from. 
-- We do not care about groups older than current G1 because they are -- closed; we could have stolen from these, but then we just add a -- spurious wake-up for the current groups. -- We will never steal a signal from current G2 that was really intended -- for G2 because G2 never receives signals (until it becomes G1). We -- could have stolen a signal from G2 that was conservatively added by a -- previous waiter that also thought it stole a signal -- but given that -- that signal was added unnecessarily, it's not a problem if we steal -- it. -- Thus, the remaining case is that we could have stolen from the current -- G1, where "current" means the __g1_start value we observed. However, -- if the current G1 does not have the same slot index as we do, we did -- not steal from it and do not need to undo that. This is the reason -- for putting a bit with G2's index into__g1_start as well. */ -- if (((g1_start & 1) ^ 1) == g) -- { -- /* We have to conservatively undo our potential mistake of stealing -- a signal. We can stop trying to do that when the current G1 -- changes because other spinning waiters will notice this too and -- __condvar_quiesce_and_switch_g1 has checked that there are no -- futex waiters anymore before switching G1. -- Relaxed MO is fine for the __g1_start load because we need to -- merely be able to observe this fact and not have to observe -- something else as well. -- ??? Would it help to spin for a little while to see whether the -- current G1 gets closed? This might be worthwhile if the group is -- small or close to being closed. */ -- unsigned int s = atomic_load_relaxed (cond->__data.__g_signals + g); -- while (__condvar_load_g1_start_relaxed (cond) == g1_start) -- { -- /* Try to add a signal. We don't need to acquire the lock -- because at worst we can cause a spurious wake-up. If the -- group is in the process of being closed (LSB is true), this -- has an effect similar to us adding a signal. */ -- if (((s & 1) != 0) -- || atomic_compare_exchange_weak_relaxed -- (cond->__data.__g_signals + g, &s, s + 2)) -- { -- /* If we added a signal, we also need to add a wake-up on -- the futex. We also need to do that if we skipped adding -- a signal because the group is being closed because -- while __condvar_quiesce_and_switch_g1 could have closed -- the group, it might stil be waiting for futex waiters to -- leave (and one of those waiters might be the one we stole -- the signal from, which cause it to block using the -- futex). */ -- futex_wake (cond->__data.__g_signals + g, 1, private); -- break; -- } -- /* TODO Back off. */ -- } -- } -- } -- - done: - - /* Confirm that we have been woken. We do that before acquiring the mutex --- -2.49.0 - diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-2.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-2.patch deleted file mode 100644 index cb89431769..0000000000 --- a/meta/recipes-core/glibc/glibc/0026-PR25847-2.patch +++ /dev/null @@ -1,144 +0,0 @@ -From 6aab1191e35a3da66e8c49d95178a9d77c119a1f Mon Sep 17 00:00:00 2001 -From: Malte Skarupke -Date: Mon, 16 Jun 2025 23:17:53 -0700 -Subject: [PATCH] nptl: Update comments and indentation for new condvar - implementation - -Some comments were wrong after the most recent commit. This fixes that. -Also fixing indentation where it was using spaces instead of tabs. 
- -The following commits have been cherry-picked from Glibc master branch: -Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 - -Upstream-Status: Backport -[https://sourceware.org/git/?p=glibc.git;a=commit;h=0cc973160c23bb67f895bc887dd6942d29f8fee3] - -Signed-off-by: Sunil Dora ---- - nptl/pthread_cond_common.c | 5 +++-- - nptl/pthread_cond_wait.c | 39 +++++++++++++++++++------------------- - 2 files changed, 22 insertions(+), 22 deletions(-) - -diff --git a/nptl/pthread_cond_common.c b/nptl/pthread_cond_common.c -index 8dd7037923..306a207dd6 100644 ---- a/nptl/pthread_cond_common.c -+++ b/nptl/pthread_cond_common.c -@@ -221,8 +221,9 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, - * New waiters arriving concurrently with the group switching will all go - into G2 until we atomically make the switch. Waiters existing in G2 - are not affected. -- * Waiters in G1 will be closed out immediately by the advancing of -- __g_signals to the next "lowseq" (low 31 bits of the new g1_start), -+ * Waiters in G1 have already received a signal and been woken. If they -+ haven't woken yet, they will be closed out immediately by the advancing -+ of __g_signals to the next "lowseq" (low 31 bits of the new g1_start), - which will prevent waiters from blocking using a futex on - __g_signals since it provides enough signals for all possible - remaining waiters. As a result, they can each consume a signal -diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c -index 1cb3dbf7b0..cee1968756 100644 ---- a/nptl/pthread_cond_wait.c -+++ b/nptl/pthread_cond_wait.c -@@ -249,7 +249,7 @@ __condvar_cleanup_waiting (void *arg) - figure out whether they are in a group that has already been completely - signaled (i.e., if the current G1 starts at a later position that the - waiter's position). Waiters cannot determine whether they are currently -- in G2 or G1 -- but they do not have too because all they are interested in -+ in G2 or G1 -- but they do not have to because all they are interested in - is whether there are available signals, and they always start in G2 (whose - group slot they know because of the bit in the waiter sequence. Signalers - will simply fill the right group until it is completely signaled and can -@@ -412,7 +412,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - } - - /* Now wait until a signal is available in our group or it is closed. -- Acquire MO so that if we observe a value of zero written after group -+ Acquire MO so that if we observe (signals == lowseq) after group - switching in __condvar_quiesce_and_switch_g1, we synchronize with that - store and will see the prior update of __g1_start done while switching - groups too. */ -@@ -422,8 +422,8 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - { - while (1) - { -- uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); -- unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; -+ uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); -+ unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; - - /* Spin-wait first. - Note that spinning first without checking whether a timeout -@@ -447,21 +447,21 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - - /* Reload signals. See above for MO. */ - signals = atomic_load_acquire (cond->__data.__g_signals + g); -- g1_start = __condvar_load_g1_start_relaxed (cond); -- lowseq = (g1_start & 1) == g ? 
signals : g1_start & ~1U; -+ g1_start = __condvar_load_g1_start_relaxed (cond); -+ lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; - spin--; - } - -- if (seq < (g1_start >> 1)) -+ if (seq < (g1_start >> 1)) - { -- /* If the group is closed already, -+ /* If the group is closed already, - then this waiter originally had enough extra signals to - consume, up until the time its group was closed. */ - goto done; -- } -+ } - - /* If there is an available signal, don't block. -- If __g1_start has advanced at all, then we must be in G1 -+ If __g1_start has advanced at all, then we must be in G1 - by now, perhaps in the process of switching back to an older - G2, but in either case we're allowed to consume the available - signal and should not block anymore. */ -@@ -483,22 +483,23 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - sequence. */ - atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2); - signals = atomic_load_acquire (cond->__data.__g_signals + g); -- g1_start = __condvar_load_g1_start_relaxed (cond); -- lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; -+ g1_start = __condvar_load_g1_start_relaxed (cond); -+ lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; - -- if (seq < (g1_start >> 1)) -+ if (seq < (g1_start >> 1)) - { -- /* group is closed already, so don't block */ -+ /* group is closed already, so don't block */ - __condvar_dec_grefs (cond, g, private); - goto done; - } - - if ((int)(signals - lowseq) >= 2) - { -- /* a signal showed up or G1/G2 switched after we grabbed the refcount */ -+ /* a signal showed up or G1/G2 switched after we grabbed the -+ refcount */ - __condvar_dec_grefs (cond, g, private); - break; -- } -+ } - - // Now block. - struct _pthread_cleanup_buffer buffer; -@@ -536,10 +537,8 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - if (seq < (__condvar_load_g1_start_relaxed (cond) >> 1)) - goto done; - } -- /* Try to grab a signal. Use acquire MO so that we see an up-to-date value -- of __g1_start below (see spinning above for a similar case). In -- particular, if we steal from a more recent group, we will also see a -- more recent __g1_start below. */ -+ /* Try to grab a signal. See above for MO. (if we do another loop -+ iteration we need to see the correct value of g1_start) */ - while (!atomic_compare_exchange_weak_acquire (cond->__data.__g_signals + g, - &signals, signals - 2)); - --- -2.49.0 - diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-3.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-3.patch deleted file mode 100644 index 4cfcca846c..0000000000 --- a/meta/recipes-core/glibc/glibc/0026-PR25847-3.patch +++ /dev/null @@ -1,77 +0,0 @@ -From 28a5082045429fdc5a4744d45fdc5b5202528eaa Mon Sep 17 00:00:00 2001 -From: Malte Skarupke -Date: Mon, 16 Jun 2025 23:29:49 -0700 -Subject: [PATCH] nptl: Remove unnecessary catch-all-wake in condvar group - switch - -This wake is unnecessary. We only switch groups after every sleeper in a group -has been woken. Sure, they may take a while to actually wake up and may still -hold a reference, but waking them a second time doesn't speed that up. Instead -this just makes the code more complicated and may hide problems. - -In particular this safety wake wouldn't even have helped with the bug that was -fixed by Barrus' patch: The bug there was that pthread_cond_signal would not -switch g1 when it should, so we wouldn't even have entered this code path. 
- -The following commits have been cherry-picked from Glibc master branch: -Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 - -Upstream-Status: Backport -[https://sourceware.org/git/?p=glibc.git;a=commit;h=b42cc6af11062c260c7dfa91f1c89891366fed3e] - -Signed-off-by: Sunil Dora ---- - nptl/pthread_cond_common.c | 30 +----------------------------- - 1 file changed, 1 insertion(+), 29 deletions(-) - -diff --git a/nptl/pthread_cond_common.c b/nptl/pthread_cond_common.c -index 306a207dd6..f976a533a1 100644 ---- a/nptl/pthread_cond_common.c -+++ b/nptl/pthread_cond_common.c -@@ -221,13 +221,7 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, - * New waiters arriving concurrently with the group switching will all go - into G2 until we atomically make the switch. Waiters existing in G2 - are not affected. -- * Waiters in G1 have already received a signal and been woken. If they -- haven't woken yet, they will be closed out immediately by the advancing -- of __g_signals to the next "lowseq" (low 31 bits of the new g1_start), -- which will prevent waiters from blocking using a futex on -- __g_signals since it provides enough signals for all possible -- remaining waiters. As a result, they can each consume a signal -- and they will eventually remove their group reference. */ -+ * Waiters in G1 have already received a signal and been woken. */ - - /* Update __g1_start, which finishes closing this group. The value we add - will never be negative because old_orig_size can only be zero when we -@@ -240,28 +234,6 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, - - unsigned int lowseq = ((old_g1_start + old_orig_size) << 1) & ~1U; - -- /* If any waiters still hold group references (and thus could be blocked), -- then wake them all up now and prevent any running ones from blocking. -- This is effectively a catch-all for any possible current or future -- bugs that can allow the group size to reach 0 before all G1 waiters -- have been awakened or at least given signals to consume, or any -- other case that can leave blocked (or about to block) older waiters.. */ -- if ((atomic_fetch_or_release (cond->__data.__g_refs + g1, 0) >> 1) > 0) -- { -- /* First advance signals to the end of the group (i.e. enough signals -- for the entire G1 group) to ensure that waiters which have not -- yet blocked in the futex will not block. -- Note that in the vast majority of cases, this should never -- actually be necessary, since __g_signals will have enough -- signals for the remaining g_refs waiters. As an optimization, -- we could check this first before proceeding, although that -- could still leave the potential for futex lost wakeup bugs -- if the signal count was non-zero but the futex wakeup -- was somehow lost. */ -- atomic_store_release (cond->__data.__g_signals + g1, lowseq); -- -- futex_wake (cond->__data.__g_signals + g1, INT_MAX, private); -- } - /* At this point, the old G1 is now a valid new G2 (but not in use yet). - No old waiter can neither grab a signal nor acquire a reference without - noticing that __g1_start is larger. 
--- -2.49.0 - diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-4.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-4.patch deleted file mode 100644 index f8674d62ae..0000000000 --- a/meta/recipes-core/glibc/glibc/0026-PR25847-4.patch +++ /dev/null @@ -1,117 +0,0 @@ -From 16b9af737c77b153fca4f36cbdbe94f7416c0b42 Mon Sep 17 00:00:00 2001 -From: Malte Skarupke -Date: Mon, 16 Jun 2025 23:38:40 -0700 -Subject: [PATCH] nptl: Remove unnecessary quadruple check in pthread_cond_wait - -pthread_cond_wait was checking whether it was in a closed group no less than -four times. Checking once is enough. Here are the four checks: - -1. While spin-waiting. This was dead code: maxspin is set to 0 and has been - for years. -2. Before deciding to go to sleep, and before incrementing grefs: I kept this -3. After incrementing grefs. There is no reason to think that the group would - close while we do an atomic increment. Obviously it could close at any - point, but that doesn't mean we have to recheck after every step. This - check was equally good as check 2, except it has to do more work. -4. When we find ourselves in a group that has a signal. We only get here after - we check that we're not in a closed group. There is no need to check again. - The check would only have helped in cases where the compare_exchange in the - next line would also have failed. Relying on the compare_exchange is fine. - -Removing the duplicate checks clarifies the code. - -The following commits have been cherry-picked from Glibc master branch: -Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 - -Upstream-Status: Backport -[https://sourceware.org/git/?p=glibc.git;a=commit;h=4f7b051f8ee3feff1b53b27a906f245afaa9cee1] - -Signed-off-by: Sunil Dora ---- - nptl/pthread_cond_wait.c | 49 ---------------------------------------- - 1 file changed, 49 deletions(-) - -diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c -index cee1968756..47e834cade 100644 ---- a/nptl/pthread_cond_wait.c -+++ b/nptl/pthread_cond_wait.c -@@ -366,7 +366,6 @@ static __always_inline int - __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - clockid_t clockid, const struct __timespec64 *abstime) - { -- const int maxspin = 0; - int err; - int result = 0; - -@@ -425,33 +424,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); - unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; - -- /* Spin-wait first. -- Note that spinning first without checking whether a timeout -- passed might lead to what looks like a spurious wake-up even -- though we should return ETIMEDOUT (e.g., if the caller provides -- an absolute timeout that is clearly in the past). However, -- (1) spurious wake-ups are allowed, (2) it seems unlikely that a -- user will (ab)use pthread_cond_wait as a check for whether a -- point in time is in the past, and (3) spinning first without -- having to compare against the current time seems to be the right -- choice from a performance perspective for most use cases. */ -- unsigned int spin = maxspin; -- while (spin > 0 && ((int)(signals - lowseq) < 2)) -- { -- /* Check that we are not spinning on a group that's already -- closed. */ -- if (seq < (g1_start >> 1)) -- break; -- -- /* TODO Back off. */ -- -- /* Reload signals. See above for MO. */ -- signals = atomic_load_acquire (cond->__data.__g_signals + g); -- g1_start = __condvar_load_g1_start_relaxed (cond); -- lowseq = (g1_start & 1) == g ? 
signals : g1_start & ~1U; -- spin--; -- } -- - if (seq < (g1_start >> 1)) - { - /* If the group is closed already, -@@ -482,24 +454,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - an atomic read-modify-write operation and thus extend the release - sequence. */ - atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2); -- signals = atomic_load_acquire (cond->__data.__g_signals + g); -- g1_start = __condvar_load_g1_start_relaxed (cond); -- lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; -- -- if (seq < (g1_start >> 1)) -- { -- /* group is closed already, so don't block */ -- __condvar_dec_grefs (cond, g, private); -- goto done; -- } -- -- if ((int)(signals - lowseq) >= 2) -- { -- /* a signal showed up or G1/G2 switched after we grabbed the -- refcount */ -- __condvar_dec_grefs (cond, g, private); -- break; -- } - - // Now block. - struct _pthread_cleanup_buffer buffer; -@@ -533,9 +487,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - /* Reload signals. See above for MO. */ - signals = atomic_load_acquire (cond->__data.__g_signals + g); - } -- -- if (seq < (__condvar_load_g1_start_relaxed (cond) >> 1)) -- goto done; - } - /* Try to grab a signal. See above for MO. (if we do another loop - iteration we need to see the correct value of g1_start) */ --- -2.49.0 - diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-5.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-5.patch deleted file mode 100644 index 16fe6f8460..0000000000 --- a/meta/recipes-core/glibc/glibc/0026-PR25847-5.patch +++ /dev/null @@ -1,105 +0,0 @@ -From d9ffb50dc55f77e584a5d0275eea758c7a6b04e3 Mon Sep 17 00:00:00 2001 -From: Malte Skarupke -Date: Mon, 16 Jun 2025 23:53:35 -0700 -Subject: [PATCH] nptl: Use a single loop in pthread_cond_wait instaed of a - nested loop - -The loop was a little more complicated than necessary. There was only one -break statement out of the inner loop, and the outer loop was nearly empty. -So just remove the outer loop, moving its code to the one break statement in -the inner loop. This allows us to replace all gotos with break statements. - -The following commits have been cherry-picked from Glibc master branch: -Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 - -Upstream-Status: Backport -[https://sourceware.org/git/?p=glibc.git;a=commit;h=929a4764ac90382616b6a21f099192b2475da674] - -Signed-off-by: Sunil Dora ---- - nptl/pthread_cond_wait.c | 41 +++++++++++++++++++--------------------- - 1 file changed, 19 insertions(+), 22 deletions(-) - -diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c -index 47e834cade..5c86880105 100644 ---- a/nptl/pthread_cond_wait.c -+++ b/nptl/pthread_cond_wait.c -@@ -410,17 +410,15 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - return err; - } - -- /* Now wait until a signal is available in our group or it is closed. -- Acquire MO so that if we observe (signals == lowseq) after group -- switching in __condvar_quiesce_and_switch_g1, we synchronize with that -- store and will see the prior update of __g1_start done while switching -- groups too. */ -- unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g); -- -- do -- { -+ - while (1) - { -+ /* Now wait until a signal is available in our group or it is closed. 
-+ Acquire MO so that if we observe (signals == lowseq) after group -+ switching in __condvar_quiesce_and_switch_g1, we synchronize with that -+ store and will see the prior update of __g1_start done while switching -+ groups too. */ -+ unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g); - uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); - unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; - -@@ -429,7 +427,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - /* If the group is closed already, - then this waiter originally had enough extra signals to - consume, up until the time its group was closed. */ -- goto done; -+ break; - } - - /* If there is an available signal, don't block. -@@ -438,8 +436,16 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - G2, but in either case we're allowed to consume the available - signal and should not block anymore. */ - if ((int)(signals - lowseq) >= 2) -- break; -- -+ { -+ /* Try to grab a signal. See above for MO. (if we do another loop -+ iteration we need to see the correct value of g1_start) */ -+ if (atomic_compare_exchange_weak_acquire ( -+ cond->__data.__g_signals + g, -+ &signals, signals - 2)) -+ break; -+ else -+ continue; -+ } - /* No signals available after spinning, so prepare to block. - We first acquire a group reference and use acquire MO for that so - that we synchronize with the dummy read-modify-write in -@@ -479,21 +485,12 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - the lock during cancellation is not possible. */ - __condvar_cancel_waiting (cond, seq, g, private); - result = err; -- goto done; -+ break; - } - else - __condvar_dec_grefs (cond, g, private); - -- /* Reload signals. See above for MO. */ -- signals = atomic_load_acquire (cond->__data.__g_signals + g); - } -- } -- /* Try to grab a signal. See above for MO. (if we do another loop -- iteration we need to see the correct value of g1_start) */ -- while (!atomic_compare_exchange_weak_acquire (cond->__data.__g_signals + g, -- &signals, signals - 2)); -- -- done: - - /* Confirm that we have been woken. We do that before acquiring the mutex - to allow for execution of pthread_cond_destroy while having acquired the --- -2.49.0 - diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-6.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-6.patch deleted file mode 100644 index cf87e21ddd..0000000000 --- a/meta/recipes-core/glibc/glibc/0026-PR25847-6.patch +++ /dev/null @@ -1,169 +0,0 @@ -From a2faee6d0dac6e5232255da9afda4d9ed6cfb6e5 Mon Sep 17 00:00:00 2001 -From: Malte Skarupke -Date: Tue, 17 Jun 2025 01:37:12 -0700 -Subject: [PATCH] nptl: Fix indentation - -In my previous change I turned a nested loop into a simple loop. I'm doing -the resulting indentation changes in a separate commit to make the diff on -the previous commit easier to review. 
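
The control-flow change described in the last two commits above (single loop instead of a nested loop, then the follow-up re-indentation) can be pictured with a small skeleton. This is only the shape of the refactor, not the glibc code; the helper functions are placeholders:

  /* Skeleton of the refactor only; the helpers are stubs, not glibc APIs. */
  static int group_closed (void)       { return 0; }
  static int signal_available (void)   { return 1; }
  static int try_consume_signal (void) { return 1; }
  static void block_on_futex (void)    { }

  static void wait_before (void)
  {
    do
      {
        while (1)
          {
            if (group_closed ())
              goto done;
            if (signal_available ())
              break;            /* the single way out of the inner loop */
            block_on_futex ();
          }
      }
    while (!try_consume_signal ());   /* the outer loop existed only for this */
   done:
    return;
  }

  static void wait_after (void)
  {
    while (1)
      {
        if (group_closed ())
          break;                      /* was: goto done */
        if (signal_available ())
          {
            if (try_consume_signal ())
              break;                  /* consumed; was the outer-loop exit */
            continue;                 /* lost the race; re-check the state */
          }
        block_on_futex ();
      }
  }

Moving the consume attempt to the old break site is what lets every goto become a break, which is exactly what the indentation-only commit then cleans up.
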
- -The following commits have been cherry-picked from Glibc master branch: -Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 - -Upstream-Status: Backport -[https://sourceware.org/git/?p=glibc.git;a=commit;h=ee6c14ed59d480720721aaacc5fb03213dc153da] - -Signed-off-by: Sunil Dora ---- - nptl/pthread_cond_wait.c | 132 ++++++++++++++++----------------------- - 1 file changed, 54 insertions(+), 78 deletions(-) - -diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c -index 5c86880105..104ebd48ca 100644 ---- a/nptl/pthread_cond_wait.c -+++ b/nptl/pthread_cond_wait.c -@@ -410,87 +410,63 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - return err; - } - -- -- while (1) -- { -- /* Now wait until a signal is available in our group or it is closed. -- Acquire MO so that if we observe (signals == lowseq) after group -- switching in __condvar_quiesce_and_switch_g1, we synchronize with that -- store and will see the prior update of __g1_start done while switching -- groups too. */ -- unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g); -- uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); -- unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; -- -- if (seq < (g1_start >> 1)) -- { -- /* If the group is closed already, -- then this waiter originally had enough extra signals to -- consume, up until the time its group was closed. */ -- break; -- } -- -- /* If there is an available signal, don't block. -- If __g1_start has advanced at all, then we must be in G1 -- by now, perhaps in the process of switching back to an older -- G2, but in either case we're allowed to consume the available -- signal and should not block anymore. */ -- if ((int)(signals - lowseq) >= 2) -- { -- /* Try to grab a signal. See above for MO. (if we do another loop -- iteration we need to see the correct value of g1_start) */ -- if (atomic_compare_exchange_weak_acquire ( -- cond->__data.__g_signals + g, -+ while (1) -+ { -+ /* Now wait until a signal is available in our group or it is closed. -+ Acquire MO so that if we observe (signals == lowseq) after group -+ switching in __condvar_quiesce_and_switch_g1, we synchronize with that -+ store and will see the prior update of __g1_start done while switching -+ groups too. */ -+ unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g); -+ uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); -+ unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; -+ -+ if (seq < (g1_start >> 1)) -+ { -+ /* If the group is closed already, -+ then this waiter originally had enough extra signals to -+ consume, up until the time its group was closed. */ -+ break; -+ } -+ -+ /* If there is an available signal, don't block. -+ If __g1_start has advanced at all, then we must be in G1 -+ by now, perhaps in the process of switching back to an older -+ G2, but in either case we're allowed to consume the available -+ signal and should not block anymore. */ -+ if ((int)(signals - lowseq) >= 2) -+ { -+ /* Try to grab a signal. See above for MO. (if we do another loop -+ iteration we need to see the correct value of g1_start) */ -+ if (atomic_compare_exchange_weak_acquire ( -+ cond->__data.__g_signals + g, - &signals, signals - 2)) -- break; -- else -- continue; -- } -- /* No signals available after spinning, so prepare to block. 
-- We first acquire a group reference and use acquire MO for that so -- that we synchronize with the dummy read-modify-write in -- __condvar_quiesce_and_switch_g1 if we read from that. In turn, -- in this case this will make us see the advancement of __g_signals -- to the upcoming new g1_start that occurs with a concurrent -- attempt to reuse the group's slot. -- We use acquire MO for the __g_signals check to make the -- __g1_start check work (see spinning above). -- Note that the group reference acquisition will not mask the -- release MO when decrementing the reference count because we use -- an atomic read-modify-write operation and thus extend the release -- sequence. */ -- atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2); -- -- // Now block. -- struct _pthread_cleanup_buffer buffer; -- struct _condvar_cleanup_buffer cbuffer; -- cbuffer.wseq = wseq; -- cbuffer.cond = cond; -- cbuffer.mutex = mutex; -- cbuffer.private = private; -- __pthread_cleanup_push (&buffer, __condvar_cleanup_waiting, &cbuffer); -- -- err = __futex_abstimed_wait_cancelable64 ( -- cond->__data.__g_signals + g, signals, clockid, abstime, private); -- -- __pthread_cleanup_pop (&buffer, 0); -- -- if (__glibc_unlikely (err == ETIMEDOUT || err == EOVERFLOW)) -- { -- __condvar_dec_grefs (cond, g, private); -- /* If we timed out, we effectively cancel waiting. Note that -- we have decremented __g_refs before cancellation, so that a -- deadlock between waiting for quiescence of our group in -- __condvar_quiesce_and_switch_g1 and us trying to acquire -- the lock during cancellation is not possible. */ -- __condvar_cancel_waiting (cond, seq, g, private); -- result = err; - break; -- } -- else -- __condvar_dec_grefs (cond, g, private); -- -+ else -+ continue; - } -+ // Now block. -+ struct _pthread_cleanup_buffer buffer; -+ struct _condvar_cleanup_buffer cbuffer; -+ cbuffer.wseq = wseq; -+ cbuffer.cond = cond; -+ cbuffer.mutex = mutex; -+ cbuffer.private = private; -+ __pthread_cleanup_push (&buffer, __condvar_cleanup_waiting, &cbuffer); -+ -+ err = __futex_abstimed_wait_cancelable64 ( -+ cond->__data.__g_signals + g, signals, clockid, abstime, private); -+ -+ __pthread_cleanup_pop (&buffer, 0); -+ -+ if (__glibc_unlikely (err == ETIMEDOUT || err == EOVERFLOW)) -+ { -+ /* If we timed out, we effectively cancel waiting. */ -+ __condvar_cancel_waiting (cond, seq, g, private); -+ result = err; -+ break; -+ } -+ } - - /* Confirm that we have been woken. 
We do that before acquiring the mutex - to allow for execution of pthread_cond_destroy while having acquired the --- -2.49.0 - diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-7.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-7.patch deleted file mode 100644 index a9e9cc7c48..0000000000 --- a/meta/recipes-core/glibc/glibc/0026-PR25847-7.patch +++ /dev/null @@ -1,160 +0,0 @@ -From 2a601ac9041e2ca645acad2c174b1c545cfceafe Mon Sep 17 00:00:00 2001 -From: Malte Skarupke -Date: Tue, 17 Jun 2025 01:53:25 -0700 -Subject: [PATCH] nptl: rename __condvar_quiesce_and_switch_g1 - -This function no longer waits for threads to leave g1, so rename it to -__condvar_switch_g1 - -The following commits have been cherry-picked from Glibc master branch: -Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 - -Upstream-Status: Backport -[https://sourceware.org/git/?p=glibc.git;a=commit;h=4b79e27a5073c02f6bff9aa8f4791230a0ab1867] - -Signed-off-by: Sunil Dora ---- - nptl/pthread_cond_broadcast.c | 4 ++-- - nptl/pthread_cond_common.c | 26 ++++++++++++-------------- - nptl/pthread_cond_signal.c | 17 ++++++++--------- - nptl/pthread_cond_wait.c | 9 ++++----- - 4 files changed, 26 insertions(+), 30 deletions(-) - -diff --git a/nptl/pthread_cond_broadcast.c b/nptl/pthread_cond_broadcast.c -index 5ae141ac81..a07435589a 100644 ---- a/nptl/pthread_cond_broadcast.c -+++ b/nptl/pthread_cond_broadcast.c -@@ -60,7 +60,7 @@ ___pthread_cond_broadcast (pthread_cond_t *cond) - cond->__data.__g_size[g1] << 1); - cond->__data.__g_size[g1] = 0; - -- /* We need to wake G1 waiters before we quiesce G1 below. */ -+ /* We need to wake G1 waiters before we switch G1 below. */ - /* TODO Only set it if there are indeed futex waiters. We could - also try to move this out of the critical section in cases when - G2 is empty (and we don't need to quiesce). */ -@@ -69,7 +69,7 @@ ___pthread_cond_broadcast (pthread_cond_t *cond) - - /* G1 is complete. Step (2) is next unless there are no waiters in G2, in - which case we can stop. */ -- if (__condvar_quiesce_and_switch_g1 (cond, wseq, &g1, private)) -+ if (__condvar_switch_g1 (cond, wseq, &g1, private)) - { - /* Step (3): Send signals to all waiters in the old G2 / new G1. */ - atomic_fetch_add_relaxed (cond->__data.__g_signals + g1, -diff --git a/nptl/pthread_cond_common.c b/nptl/pthread_cond_common.c -index f976a533a1..3baac4dabc 100644 ---- a/nptl/pthread_cond_common.c -+++ b/nptl/pthread_cond_common.c -@@ -189,16 +189,15 @@ __condvar_get_private (int flags) - return FUTEX_SHARED; - } - --/* This closes G1 (whose index is in G1INDEX), waits for all futex waiters to -- leave G1, converts G1 into a fresh G2, and then switches group roles so that -- the former G2 becomes the new G1 ending at the current __wseq value when we -- eventually make the switch (WSEQ is just an observation of __wseq by the -- signaler). -+/* This closes G1 (whose index is in G1INDEX), converts G1 into a fresh G2, -+ and then switches group roles so that the former G2 becomes the new G1 -+ ending at the current __wseq value when we eventually make the switch -+ (WSEQ is just an observation of __wseq by the signaler). - If G2 is empty, it will not switch groups because then it would create an - empty G1 which would require switching groups again on the next signal. - Returns false iff groups were not switched because G2 was empty. 
*/ - static bool __attribute__ ((unused)) --__condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, -+__condvar_switch_g1 (pthread_cond_t *cond, uint64_t wseq, - unsigned int *g1index, int private) - { - unsigned int g1 = *g1index; -@@ -214,8 +213,7 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, - + cond->__data.__g_size[g1 ^ 1]) == 0) - return false; - -- /* Now try to close and quiesce G1. We have to consider the following kinds -- of waiters: -+ /* We have to consider the following kinds of waiters: - * Waiters from less recent groups than G1 are not affected because - nothing will change for them apart from __g1_start getting larger. - * New waiters arriving concurrently with the group switching will all go -@@ -223,12 +221,12 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, - are not affected. - * Waiters in G1 have already received a signal and been woken. */ - -- /* Update __g1_start, which finishes closing this group. The value we add -- will never be negative because old_orig_size can only be zero when we -- switch groups the first time after a condvar was initialized, in which -- case G1 will be at index 1 and we will add a value of 1. -- Relaxed MO is fine because the change comes with no additional -- constraints that others would have to observe. */ -+ /* Update __g1_start, which closes this group. The value we add will never -+ be negative because old_orig_size can only be zero when we switch groups -+ the first time after a condvar was initialized, in which case G1 will be -+ at index 1 and we will add a value of 1. Relaxed MO is fine because the -+ change comes with no additional constraints that others would have to -+ observe. */ - __condvar_add_g1_start_relaxed (cond, - (old_orig_size << 1) + (g1 == 1 ? 1 : - 1)); - -diff --git a/nptl/pthread_cond_signal.c b/nptl/pthread_cond_signal.c -index 14800ba00b..a9bc10dcca 100644 ---- a/nptl/pthread_cond_signal.c -+++ b/nptl/pthread_cond_signal.c -@@ -69,18 +69,17 @@ ___pthread_cond_signal (pthread_cond_t *cond) - bool do_futex_wake = false; - - /* If G1 is still receiving signals, we put the signal there. If not, we -- check if G2 has waiters, and if so, quiesce and switch G1 to the former -- G2; if this results in a new G1 with waiters (G2 might have cancellations -- already, see __condvar_quiesce_and_switch_g1), we put the signal in the -- new G1. */ -+ check if G2 has waiters, and if so, switch G1 to the former G2; if this -+ results in a new G1 with waiters (G2 might have cancellations already, -+ see __condvar_switch_g1), we put the signal in the new G1. */ - if ((cond->__data.__g_size[g1] != 0) -- || __condvar_quiesce_and_switch_g1 (cond, wseq, &g1, private)) -+ || __condvar_switch_g1 (cond, wseq, &g1, private)) - { - /* Add a signal. Relaxed MO is fine because signaling does not need to -- establish a happens-before relation (see above). We do not mask the -- release-MO store when initializing a group in -- __condvar_quiesce_and_switch_g1 because we use an atomic -- read-modify-write and thus extend that store's release sequence. */ -+ establish a happens-before relation (see above). We do not mask the -+ release-MO store when initializing a group in __condvar_switch_g1 -+ because we use an atomic read-modify-write and thus extend that -+ store's release sequence. */ - atomic_fetch_add_relaxed (cond->__data.__g_signals + g1, 2); - cond->__data.__g_size[g1]--; - /* TODO Only set it if there are indeed futex waiters. 
*/ -diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c -index 104ebd48ca..bb46f3605d 100644 ---- a/nptl/pthread_cond_wait.c -+++ b/nptl/pthread_cond_wait.c -@@ -382,8 +382,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - because we do not need to establish any happens-before relation with - signalers (see __pthread_cond_signal); modification order alone - establishes a total order of waiters/signals. We do need acquire MO -- to synchronize with group reinitialization in -- __condvar_quiesce_and_switch_g1. */ -+ to synchronize with group reinitialization in __condvar_switch_g1. */ - uint64_t wseq = __condvar_fetch_add_wseq_acquire (cond, 2); - /* Find our group's index. We always go into what was G2 when we acquired - our position. */ -@@ -414,9 +413,9 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - { - /* Now wait until a signal is available in our group or it is closed. - Acquire MO so that if we observe (signals == lowseq) after group -- switching in __condvar_quiesce_and_switch_g1, we synchronize with that -- store and will see the prior update of __g1_start done while switching -- groups too. */ -+ switching in __condvar_switch_g1, we synchronize with that store and -+ will see the prior update of __g1_start done while switching groups -+ too. */ - unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g); - uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); - unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; --- -2.49.0 - diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-8.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-8.patch deleted file mode 100644 index 8ea0c784ef..0000000000 --- a/meta/recipes-core/glibc/glibc/0026-PR25847-8.patch +++ /dev/null @@ -1,192 +0,0 @@ -From fc074de88796eb2036fbe9bade638e00adfd5cb2 Mon Sep 17 00:00:00 2001 -From: Malte Skarupke -Date: Tue, 17 Jun 2025 02:08:36 -0700 -Subject: [PATCH] nptl: Use all of g1_start and g_signals - -The LSB of g_signals was unused. The LSB of g1_start was used to indicate -which group is G2. This was used to always go to sleep in pthread_cond_wait -if a waiter is in G2. A comment earlier in the file says that this is not -correct to do: - - "Waiters cannot determine whether they are currently in G2 or G1 -- but they - do not have to because all they are interested in is whether there are - available signals" - -I either would have had to update the comment, or get rid of the check. I -chose to get rid of the check. In fact I don't quite know why it was there. -There will never be available signals for group G2, so we didn't need the -special case. Even if there were, this would just be a spurious wake. This -might have caught some cases where the count has wrapped around, but it -wouldn't reliably do that, (and even if it did, why would you want to force a -sleep in that case?) and we don't support that many concurrent waiters -anyway. Getting rid of it allows us to use one more bit, making us more -robust to wraparound. 
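
The bit-usage change in this commit comes down to how a waiter tests "is there a signal for my group". A simplified comparison of the two encodings follows; the variables mirror the quoted hunks, but this is an illustration rather than the exact glibc logic:

  #include <stdint.h>

  /* Old encoding: LSB of __g_signals reserved, signals added in steps of 2,
     LSB of __g1_start marks which slot is currently G2. */
  static int
  signal_available_old (unsigned int signals, uint64_t g1_start,
                        unsigned int g /* our group slot, 0 or 1 */)
  {
    unsigned int lowseq = (((unsigned int) g1_start & 1) == g)
                          ? signals : ((unsigned int) g1_start & ~1U);
    return (int) (signals - lowseq) >= 2;
  }

  /* New encoding: all 32 bits of __g_signals hold the low bits of
     __g1_start plus the signal count, so the test is a plain wrap-safe
     difference and signals are added in steps of 1. */
  static int
  signal_available_new (unsigned int signals, uint64_t g1_start)
  {
    return (int) (signals - (unsigned int) g1_start) > 0;
  }

Dropping the reserved LSB and the "which slot is G2" flag is what gives back the extra bit the commit message says makes the counter more robust to wraparound.
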
- -The following commits have been cherry-picked from Glibc master branch: -Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 - -Upstream-Status: Backport -[https://sourceware.org/git/?p=glibc.git;a=commit;h=91bb902f58264a2fd50fbce8f39a9a290dd23706] - -Signed-off-by: Sunil Dora ---- - nptl/pthread_cond_broadcast.c | 4 ++-- - nptl/pthread_cond_common.c | 26 ++++++++++---------------- - nptl/pthread_cond_signal.c | 2 +- - nptl/pthread_cond_wait.c | 14 +++++--------- - 4 files changed, 18 insertions(+), 28 deletions(-) - -diff --git a/nptl/pthread_cond_broadcast.c b/nptl/pthread_cond_broadcast.c -index a07435589a..ef0943cdc5 100644 ---- a/nptl/pthread_cond_broadcast.c -+++ b/nptl/pthread_cond_broadcast.c -@@ -57,7 +57,7 @@ ___pthread_cond_broadcast (pthread_cond_t *cond) - { - /* Add as many signals as the remaining size of the group. */ - atomic_fetch_add_relaxed (cond->__data.__g_signals + g1, -- cond->__data.__g_size[g1] << 1); -+ cond->__data.__g_size[g1]); - cond->__data.__g_size[g1] = 0; - - /* We need to wake G1 waiters before we switch G1 below. */ -@@ -73,7 +73,7 @@ ___pthread_cond_broadcast (pthread_cond_t *cond) - { - /* Step (3): Send signals to all waiters in the old G2 / new G1. */ - atomic_fetch_add_relaxed (cond->__data.__g_signals + g1, -- cond->__data.__g_size[g1] << 1); -+ cond->__data.__g_size[g1]); - cond->__data.__g_size[g1] = 0; - /* TODO Only set it if there are indeed futex waiters. */ - do_futex_wake = true; -diff --git a/nptl/pthread_cond_common.c b/nptl/pthread_cond_common.c -index 3baac4dabc..e48f914321 100644 ---- a/nptl/pthread_cond_common.c -+++ b/nptl/pthread_cond_common.c -@@ -208,9 +208,9 @@ __condvar_switch_g1 (pthread_cond_t *cond, uint64_t wseq, - behavior. - Note that this works correctly for a zero-initialized condvar too. */ - unsigned int old_orig_size = __condvar_get_orig_size (cond); -- uint64_t old_g1_start = __condvar_load_g1_start_relaxed (cond) >> 1; -- if (((unsigned) (wseq - old_g1_start - old_orig_size) -- + cond->__data.__g_size[g1 ^ 1]) == 0) -+ uint64_t old_g1_start = __condvar_load_g1_start_relaxed (cond); -+ uint64_t new_g1_start = old_g1_start + old_orig_size; -+ if (((unsigned) (wseq - new_g1_start) + cond->__data.__g_size[g1 ^ 1]) == 0) - return false; - - /* We have to consider the following kinds of waiters: -@@ -221,16 +221,10 @@ __condvar_switch_g1 (pthread_cond_t *cond, uint64_t wseq, - are not affected. - * Waiters in G1 have already received a signal and been woken. */ - -- /* Update __g1_start, which closes this group. The value we add will never -- be negative because old_orig_size can only be zero when we switch groups -- the first time after a condvar was initialized, in which case G1 will be -- at index 1 and we will add a value of 1. Relaxed MO is fine because the -- change comes with no additional constraints that others would have to -- observe. */ -- __condvar_add_g1_start_relaxed (cond, -- (old_orig_size << 1) + (g1 == 1 ? 1 : - 1)); -- -- unsigned int lowseq = ((old_g1_start + old_orig_size) << 1) & ~1U; -+ /* Update __g1_start, which closes this group. Relaxed MO is fine because -+ the change comes with no additional constraints that others would have -+ to observe. */ -+ __condvar_add_g1_start_relaxed (cond, old_orig_size); - - /* At this point, the old G1 is now a valid new G2 (but not in use yet). 
- No old waiter can neither grab a signal nor acquire a reference without -@@ -242,13 +236,13 @@ __condvar_switch_g1 (pthread_cond_t *cond, uint64_t wseq, - g1 ^= 1; - *g1index ^= 1; - -- /* Now advance the new G1 g_signals to the new lowseq, giving it -+ /* Now advance the new G1 g_signals to the new g1_start, giving it - an effective signal count of 0 to start. */ -- atomic_store_release (cond->__data.__g_signals + g1, lowseq); -+ atomic_store_release (cond->__data.__g_signals + g1, (unsigned)new_g1_start); - - /* These values are just observed by signalers, and thus protected by the - lock. */ -- unsigned int orig_size = wseq - (old_g1_start + old_orig_size); -+ unsigned int orig_size = wseq - new_g1_start; - __condvar_set_orig_size (cond, orig_size); - /* Use and addition to not loose track of cancellations in what was - previously G2. */ -diff --git a/nptl/pthread_cond_signal.c b/nptl/pthread_cond_signal.c -index a9bc10dcca..07427369aa 100644 ---- a/nptl/pthread_cond_signal.c -+++ b/nptl/pthread_cond_signal.c -@@ -80,7 +80,7 @@ ___pthread_cond_signal (pthread_cond_t *cond) - release-MO store when initializing a group in __condvar_switch_g1 - because we use an atomic read-modify-write and thus extend that - store's release sequence. */ -- atomic_fetch_add_relaxed (cond->__data.__g_signals + g1, 2); -+ atomic_fetch_add_relaxed (cond->__data.__g_signals + g1, 1); - cond->__data.__g_size[g1]--; - /* TODO Only set it if there are indeed futex waiters. */ - do_futex_wake = true; -diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c -index bb46f3605d..430cbe8a35 100644 ---- a/nptl/pthread_cond_wait.c -+++ b/nptl/pthread_cond_wait.c -@@ -84,7 +84,7 @@ __condvar_cancel_waiting (pthread_cond_t *cond, uint64_t seq, unsigned int g, - not hold a reference on the group. */ - __condvar_acquire_lock (cond, private); - -- uint64_t g1_start = __condvar_load_g1_start_relaxed (cond) >> 1; -+ uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); - if (g1_start > seq) - { - /* Our group is closed, so someone provided enough signals for it. -@@ -278,7 +278,6 @@ __condvar_cleanup_waiting (void *arg) - * Waiters fetch-add while having acquire the mutex associated with the - condvar. Signalers load it and fetch-xor it concurrently. - __g1_start: Starting position of G1 (inclusive) -- * LSB is index of current G2. - * Modified by signalers while having acquired the condvar-internal lock - and observed concurrently by waiters. - __g1_orig_size: Initial size of G1 -@@ -299,11 +298,9 @@ __condvar_cleanup_waiting (void *arg) - * Reference count used by waiters concurrently with signalers that have - acquired the condvar-internal lock. - __g_signals: The number of signals that can still be consumed, relative to -- the current g1_start. (i.e. bits 31 to 1 of __g_signals are bits -- 31 to 1 of g1_start with the signal count added) -+ the current g1_start. (i.e. g1_start with the signal count added) - * Used as a futex word by waiters. Used concurrently by waiters and - signalers. -- * LSB is currently reserved and 0. - __g_size: Waiters remaining in this group (i.e., which have not been - signaled yet. - * Accessed by signalers and waiters that cancel waiting (both do so only -@@ -418,9 +415,8 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - too. */ - unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g); - uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); -- unsigned int lowseq = (g1_start & 1) == g ? 
signals : g1_start & ~1U; - -- if (seq < (g1_start >> 1)) -+ if (seq < g1_start) - { - /* If the group is closed already, - then this waiter originally had enough extra signals to -@@ -433,13 +429,13 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, - by now, perhaps in the process of switching back to an older - G2, but in either case we're allowed to consume the available - signal and should not block anymore. */ -- if ((int)(signals - lowseq) >= 2) -+ if ((int)(signals - (unsigned int)g1_start) > 0) - { - /* Try to grab a signal. See above for MO. (if we do another loop - iteration we need to see the correct value of g1_start) */ - if (atomic_compare_exchange_weak_acquire ( - cond->__data.__g_signals + g, -- &signals, signals - 2)) -+ &signals, signals - 1)) - break; - else - continue; --- -2.49.0 - diff --git a/meta/recipes-core/glibc/glibc_2.35.bb b/meta/recipes-core/glibc/glibc_2.35.bb index 265dcb9129..ca7f630699 100644 --- a/meta/recipes-core/glibc/glibc_2.35.bb +++ b/meta/recipes-core/glibc/glibc_2.35.bb @@ -62,14 +62,6 @@ SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \ file://0022-sysdeps-gnu-configure.ac-Set-libc_cv_rootsbindir-onl.patch \ file://0023-timezone-Make-shell-interpreter-overridable-in-tzsel.patch \ file://0024-fix-create-thread-failed-in-unprivileged-process-BZ-.patch \ - file://0026-PR25847-1.patch \ - file://0026-PR25847-2.patch \ - file://0026-PR25847-3.patch \ - file://0026-PR25847-4.patch \ - file://0026-PR25847-5.patch \ - file://0026-PR25847-6.patch \ - file://0026-PR25847-7.patch \ - file://0026-PR25847-8.patch \ \ file://0001-Revert-Linux-Implement-a-useful-version-of-_startup_.patch \ file://0002-get_nscd_addresses-Fix-subscript-typos-BZ-29605.patch \ From patchwork Tue Oct 14 22:44:42 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Sakoman X-Patchwork-Id: 72321 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6F4F8CCD18E for ; Tue, 14 Oct 2025 22:45:07 +0000 (UTC) Received: from mail-pl1-f181.google.com (mail-pl1-f181.google.com [209.85.214.181]) by mx.groups.io with SMTP id smtpd.web10.2511.1760481906786036902 for ; Tue, 14 Oct 2025 15:45:06 -0700 Authentication-Results: mx.groups.io; dkim=pass header.i=@sakoman-com.20230601.gappssmtp.com header.s=20230601 header.b=xWBtexad; spf=softfail (domain: sakoman.com, ip: 209.85.214.181, mailfrom: steve@sakoman.com) Received: by mail-pl1-f181.google.com with SMTP id d9443c01a7336-27ee41e0798so93651015ad.1 for ; Tue, 14 Oct 2025 15:45:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sakoman-com.20230601.gappssmtp.com; s=20230601; t=1760481906; x=1761086706; darn=lists.openembedded.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=wEAPMSu2QlIxOIAqGMW+CTdsAOYIIZ2HY70MJxH6CQY=; b=xWBtexadzrYJ4rc2mMEApCMD/i8Qu/RO6ZULxWstyfx/LTXHG9RIYWU0H39HaYk4U0 RtvSacc0MGXpmnj/Sl50WuzTBSpsHiLyKLkv9p/5dcdkJMyq/yRc1z6BQJTS+Npq112V cOwjBgXoh11d4uYsEm++loi4+sHQMJJRpX0xJFUOVPRjR+lLLR1cmBxyug03HW9C7wJs hGWyK1e9EiCV7TggvhRqrEyoicT3jhcSXqXgJGwo4BGOVA9pO35m/l058mbNVLtgnDTQ xaf40k9W8ILWW7g4FcK2z9Ma4ON+5BGZOLtxfiCXR28qLV6nI06uq6Usnng128DYqynC BN3Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=1e100.net; s=20230601; t=1760481906; x=1761086706; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=wEAPMSu2QlIxOIAqGMW+CTdsAOYIIZ2HY70MJxH6CQY=; b=QjgzSCbk9eId2+oOg/yxHPnlc1UyDPNc50qAwGWL6FicBC1PXNOIRVx2J5X4Ahw43o wG/PaYU18BFc/TZg8NqzHVf1mJ5+0zlmqi9seSoGfB7FYnEZN6Mml6CeuJfU4RUFlrlU 9Aqm3qq/tVpqGVCIwkt2NMThmGZUDtLNPMSRwTCpAw53M+xOPDajkwiEEvWaAdJRTpLU v4eNWY7GhVYNlWJhCYKqHoQNtLgWKnt2WuFAVWP47gEGc0ne74YG+VlCzyoyDYXHLJZJ IKibndyHO8+xbFvwCC1fO/pLGhCg3aPpq4WuoQXrwmVCJ+TvQP1N0Sf4Sp28BKYEduZn QYHA== X-Gm-Message-State: AOJu0Yw0EjrnxPLqoUnSAaP4x4HGI/UOiPrQ2qIRfuVwvjRALBdyyr0K D/GmUnuJ0I76trr17jacm8NoB47M0YfEBOBHwgiYPeoMYZUYXamtIqY3KoHkZ5QpfQKEFlyZ1in LpU+m X-Gm-Gg: ASbGncuBuCgCN9qADX5UDieMHOeGkYpy+1funvOkT5wxAdsyWFthbTZ17Uy2uZJS2L7 FcDxZ87/GRv5DEdFornAa2rDY4H8jWdMalOwLhUBiF4OvDvsV7m3RL5eTnJACOBQgQe51KMnhHt rg/OdmyYz9MNF4iQcnEG7UJOTJaWAMKqeXU8bVsxfE/r6MGrV3EPTX/256W/Vir2mL29eiLyTFa myeeWeFoBziTUl9FUcZv/qsP3RJVtRkhRUAkUdZfw8+++LXvI8nbSEk6J/m3ojTjAwAvHd8iV1V aCH0MgVsujkMZBoYkHPk0rbtKIrXN4c+Vl93kfQeq4tOKjns93VT1/rS+97huvMpM7SZguzmanw blXbVBFPkx/O7YROGvlpUfcfAIcEchxcBlDndl49fiKg= X-Google-Smtp-Source: AGHT+IGU8V5N0oQuWWXi6sk36W/eosfZEgatKoP29K4awj/zoaRVWrZFDiaibATllFHywFCmcYbUCA== X-Received: by 2002:a17:902:fc46:b0:27e:ef12:6e94 with SMTP id d9443c01a7336-29027418f97mr332830145ad.55.1760481905431; Tue, 14 Oct 2025 15:45:05 -0700 (PDT) Received: from hexa.. ([2602:feb4:3b:2100:ebea:520a:7699:bba7]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29034e20479sm174847365ad.47.2025.10.14.15.45.04 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Oct 2025 15:45:05 -0700 (PDT) From: Steve Sakoman To: openembedded-core@lists.openembedded.org Subject: [OE-core][kirkstone 05/14] glibc: pthreads NPTL lost wakeup fix 2 Date: Tue, 14 Oct 2025 15:44:42 -0700 Message-ID: <4d57f7c82ccb64e2bd2a2371ef18bdc5a4b718e3.1760481775.git.steve@sakoman.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: References: MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Tue, 14 Oct 2025 22:45:07 -0000 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-core/message/224859 From: Sunil Dora The following commits have been cherry-picked from Glibc master branch: Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 [1] https://sourceware.org/git/?p=glibc.git;a=commit;h=1db84775f831a1494993ce9c118deaf9537cc50a [2] https://sourceware.org/pipermail/libc-stable/2025-July/002277.html Signed-off-by: Sunil Dora Signed-off-by: Steve Sakoman --- .../glibc/glibc/0026-PR25847-1.patch | 455 ++++++++++++++++++ meta/recipes-core/glibc/glibc_2.35.bb | 1 + 2 files changed, 456 insertions(+) create mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-1.patch diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-1.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-1.patch new file mode 100644 index 0000000000..0f81f5b40b --- /dev/null +++ b/meta/recipes-core/glibc/glibc/0026-PR25847-1.patch @@ -0,0 +1,455 @@ +From 0402999b82f697011de388f61bad68da26060bef Mon Sep 17 00:00:00 2001 +From: Frank Barrus +Date: Tue, 14 Oct 2025 03:55:17 -0700 +Subject: [PATCH] pthreads NPTL: lost wakeup fix 2 + +This fixes the lost wakeup (from a bug in signal stealing) with a change +in the usage of g_signals[] in the condition variable internal state. 
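
For context, "signal stealing" refers to the waiter-side recovery path that this patch deletes; the real code appears in the '-' lines of the hunks further down. The toy model below is a heavily simplified outline of that old path (the function and its shape are placeholders, not glibc code, and the real version uses a CAS loop and a closed-group flag that are omitted here):

  #include <stdatomic.h>
  #include <stdint.h>

  /* Toy model of the removed "steal, then conservatively give back" path.
     g_signals counts in steps of 2, matching the pre-patch encoding. */
  static void
  consume_with_steal_recovery (atomic_uint *g_signals, uint64_t seq,
                               uint64_t g1_start, unsigned int g)
  {
    unsigned int s = atomic_load (g_signals);
    /* Old behaviour: grab a signal with a bare cmpxchg, holding no group
       reference, so the signal may belong to a newer group in the same slot. */
    while (!atomic_compare_exchange_weak (g_signals, &s, s - 2))
      ;
    /* If our group is already closed and aliases the current G1, we may
       have stolen a signal meant for a newer waiter: put one back (the real
       code also issues a futex wake so the wakeup is not lost). */
    if (seq < (g1_start >> 1) && ((g1_start & 1) ^ 1) == g)
      atomic_fetch_add (g_signals, 2);
  }

Because the new scheme makes __g_signals relative to __g1_start, a stale waiter's cmpxchg can no longer match a newer group's futex word, and this entire recovery becomes unnecessary, as the commit message below explains.
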
+It also completely eliminates the concept and handling of signal stealing, +as well as the need for signalers to block to wait for waiters to wake +up every time there is a G1/G2 switch. This greatly reduces the average +and maximum latency for pthread_cond_signal. + +The g_signals[] field now contains a signal count that is relative to +the current g1_start value. Since it is a 32-bit field, and the LSB is +still reserved (though not currently used anymore), it has a 31-bit value +that corresponds to the low 31 bits of the sequence number in g1_start. +(since g1_start also has an LSB flag, this means bits 31:1 in g_signals +correspond to bits 31:1 in g1_start, plus the current signal count) + +By making the signal count relative to g1_start, there is no longer +any ambiguity or A/B/A issue, and thus any checks before blocking, +including the futex call itself, are guaranteed not to block if the G1/G2 +switch occurs, even if the signal count remains the same. This allows +initially safely blocking in G2 until the switch to G1 occurs, and +then transitioning from G1 to a new G1 or G2, and always being able to +distinguish the state change. This removes the race condition and A/B/A +problems that otherwise ocurred if a late (pre-empted) waiter were to +resume just as the futex call attempted to block on g_signal since +otherwise there was no last opportunity to re-check things like whether +the current G1 group was already closed. + +By fixing these issues, the signal stealing code can be eliminated, +since there is no concept of signal stealing anymore. The code to block +for all waiters to exit g_refs can also be removed, since any waiters +that are still in the g_refs region can be guaranteed to safely wake +up and exit. If there are still any left at this time, they are all +sent one final futex wakeup to ensure that they are not blocked any +longer, but there is no need for the signaller to block and wait for +them to wake up and exit the g_refs region. + +The signal count is then effectively "zeroed" but since it is now +relative to g1_start, this is done by advancing it to a new value that +can be observed by any pending blocking waiters. Any late waiters can +always tell the difference, and can thus just cleanly exit if they are +in a stale G1 or G2. They can never steal a signal from the current +G1 if they are not in the current G1, since the signal value that has +to match in the cmpxchg has the low 31 bits of the g1_start value +contained in it, and that's first checked, and then it won't match if +there's a G1/G2 change. + +Note: the 31-bit sequence number used in g_signals is designed to +handle wrap-around when checking the signal count, but if the entire +31-bit wraparound (2 billion signals) occurs while there is still a +late waiter that has not yet resumed, and it happens to then match +the current g1_start low bits, and the pre-emption occurs after the +normal "closed group" checks (which are 64-bit) but then hits the +futex syscall and signal consuming code, then an A/B/A issue could +still result and cause an incorrect assumption about whether it +should block. This particular scenario seems unlikely in practice. +Note that once awake from the futex, the waiter would notice the +closed group before consuming the signal (since that's still a 64-bit +check that would not be aliased in the wrap-around in g_signals), +so the biggest impact would be blocking on the futex until the next +full wakeup from a G1/G2 switch. 
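
The essence of the fix is that every check a waiter makes before blocking, including the futex compare value itself, is expressed relative to g1_start, so a G1/G2 switch always changes the value the waiter is about to block on. A condensed sketch of that waiter-side decision follows, mirroring the '+' hunks below with the surrounding glibc machinery stripped away; the helper names and enum are placeholders:

  #include <stdint.h>

  /* Condensed model of the post-patch waiter decision (not the full glibc
     function).  'signals' and 'g1_start' are freshly loaded from
     __g_signals[g] and __g1_start; 'seq' is this waiter's position. */
  enum waiter_action { WAITER_DONE, WAITER_CONSUME, WAITER_BLOCK };

  static enum waiter_action
  waiter_decide (unsigned int signals, uint64_t g1_start,
                 uint64_t seq, unsigned int g)
  {
    unsigned int lowseq = (((unsigned int) g1_start & 1) == g)
                          ? signals : ((unsigned int) g1_start & ~1U);

    if (seq < (g1_start >> 1))
      return WAITER_DONE;       /* our group is already closed */

    if ((int) (signals - lowseq) >= 2)
      return WAITER_CONSUME;    /* a signal is available; cmpxchg it */

    /* Otherwise block with futex_wait (&__g_signals[g], signals, ...):
       if the groups switch first, __g_signals[g] is advanced to the new
       lowseq, the futex compare fails, and the waiter cannot sleep on a
       stale group -- which is what closes the lost-wakeup window. */
    return WAITER_BLOCK;
  }
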
+ +The following commits have been cherry-picked from Glibc master branch: +Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 +Commit : 1db84775f831a1494993ce9c118deaf9537cc50a + +Upstream-Status: Submitted +[https://sourceware.org/pipermail/libc-stable/2025-July/002277.html] + +Signed-off-by: Sunil Dora +--- + nptl/pthread_cond_common.c | 105 +++++++++------------------ + nptl/pthread_cond_wait.c | 144 ++++++++++++------------------------- + 2 files changed, 81 insertions(+), 168 deletions(-) + +diff --git a/nptl/pthread_cond_common.c b/nptl/pthread_cond_common.c +index fb035f72..a55eee3e 100644 +--- a/nptl/pthread_cond_common.c ++++ b/nptl/pthread_cond_common.c +@@ -201,7 +201,6 @@ static bool __attribute__ ((unused)) + __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, + unsigned int *g1index, int private) + { +- const unsigned int maxspin = 0; + unsigned int g1 = *g1index; + + /* If there is no waiter in G2, we don't do anything. The expression may +@@ -222,84 +221,46 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, + * New waiters arriving concurrently with the group switching will all go + into G2 until we atomically make the switch. Waiters existing in G2 + are not affected. +- * Waiters in G1 will be closed out immediately by setting a flag in +- __g_signals, which will prevent waiters from blocking using a futex on +- __g_signals and also notifies them that the group is closed. As a +- result, they will eventually remove their group reference, allowing us +- to close switch group roles. */ +- +- /* First, set the closed flag on __g_signals. This tells waiters that are +- about to wait that they shouldn't do that anymore. This basically +- serves as an advance notificaton of the upcoming change to __g1_start; +- waiters interpret it as if __g1_start was larger than their waiter +- sequence position. This allows us to change __g1_start after waiting +- for all existing waiters with group references to leave, which in turn +- makes recovery after stealing a signal simpler because it then can be +- skipped if __g1_start indicates that the group is closed (otherwise, +- we would have to recover always because waiters don't know how big their +- groups are). Relaxed MO is fine. */ +- atomic_fetch_or_relaxed (cond->__data.__g_signals + g1, 1); +- +- /* Wait until there are no group references anymore. The fetch-or operation +- injects us into the modification order of __g_refs; release MO ensures +- that waiters incrementing __g_refs after our fetch-or see the previous +- changes to __g_signals and to __g1_start that had to happen before we can +- switch this G1 and alias with an older group (we have two groups, so +- aliasing requires switching group roles twice). Note that nobody else +- can have set the wake-request flag, so we do not have to act upon it. +- +- Also note that it is harmless if older waiters or waiters from this G1 +- get a group reference after we have quiesced the group because it will +- remain closed for them either because of the closed flag in __g_signals +- or the later update to __g1_start. New waiters will never arrive here +- but instead continue to go into the still current G2. */ +- unsigned r = atomic_fetch_or_release (cond->__data.__g_refs + g1, 0); +- while ((r >> 1) > 0) +- { +- for (unsigned int spin = maxspin; ((r >> 1) > 0) && (spin > 0); spin--) +- { +- /* TODO Back off. 
*/ +- r = atomic_load_relaxed (cond->__data.__g_refs + g1); +- } +- if ((r >> 1) > 0) +- { +- /* There is still a waiter after spinning. Set the wake-request +- flag and block. Relaxed MO is fine because this is just about +- this futex word. +- +- Update r to include the set wake-request flag so that the upcoming +- futex_wait only blocks if the flag is still set (otherwise, we'd +- violate the basic client-side futex protocol). */ +- r = atomic_fetch_or_relaxed (cond->__data.__g_refs + g1, 1) | 1; +- +- if ((r >> 1) > 0) +- futex_wait_simple (cond->__data.__g_refs + g1, r, private); +- /* Reload here so we eventually see the most recent value even if we +- do not spin. */ +- r = atomic_load_relaxed (cond->__data.__g_refs + g1); +- } +- } +- /* Acquire MO so that we synchronize with the release operation that waiters +- use to decrement __g_refs and thus happen after the waiters we waited +- for. */ +- atomic_thread_fence_acquire (); ++ * Waiters in G1 will be closed out immediately by the advancing of ++ __g_signals to the next "lowseq" (low 31 bits of the new g1_start), ++ which will prevent waiters from blocking using a futex on ++ __g_signals since it provides enough signals for all possible ++ remaining waiters. As a result, they can each consume a signal ++ and they will eventually remove their group reference. */ + + /* Update __g1_start, which finishes closing this group. The value we add + will never be negative because old_orig_size can only be zero when we + switch groups the first time after a condvar was initialized, in which +- case G1 will be at index 1 and we will add a value of 1. See above for +- why this takes place after waiting for quiescence of the group. ++ case G1 will be at index 1 and we will add a value of 1. + Relaxed MO is fine because the change comes with no additional + constraints that others would have to observe. */ + __condvar_add_g1_start_relaxed (cond, + (old_orig_size << 1) + (g1 == 1 ? 1 : - 1)); + +- /* Now reopen the group, thus enabling waiters to again block using the +- futex controlled by __g_signals. Release MO so that observers that see +- no signals (and thus can block) also see the write __g1_start and thus +- that this is now a new group (see __pthread_cond_wait_common for the +- matching acquire MO loads). */ +- atomic_store_release (cond->__data.__g_signals + g1, 0); ++ unsigned int lowseq = ((old_g1_start + old_orig_size) << 1) & ~1U; ++ ++ /* If any waiters still hold group references (and thus could be blocked), ++ then wake them all up now and prevent any running ones from blocking. ++ This is effectively a catch-all for any possible current or future ++ bugs that can allow the group size to reach 0 before all G1 waiters ++ have been awakened or at least given signals to consume, or any ++ other case that can leave blocked (or about to block) older waiters.. */ ++ if ((atomic_fetch_or_release (cond->__data.__g_refs + g1, 0) >> 1) > 0) ++ { ++ /* First advance signals to the end of the group (i.e. enough signals ++ for the entire G1 group) to ensure that waiters which have not ++ yet blocked in the futex will not block. ++ Note that in the vast majority of cases, this should never ++ actually be necessary, since __g_signals will have enough ++ signals for the remaining g_refs waiters. As an optimization, ++ we could check this first before proceeding, although that ++ could still leave the potential for futex lost wakeup bugs ++ if the signal count was non-zero but the futex wakeup ++ was somehow lost. 
*/ ++ atomic_store_release (cond->__data.__g_signals + g1, lowseq); ++ ++ futex_wake (cond->__data.__g_signals + g1, INT_MAX, private); ++ } + + /* At this point, the old G1 is now a valid new G2 (but not in use yet). + No old waiter can neither grab a signal nor acquire a reference without +@@ -311,6 +272,10 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, + g1 ^= 1; + *g1index ^= 1; + ++ /* Now advance the new G1 g_signals to the new lowseq, giving it ++ an effective signal count of 0 to start. */ ++ atomic_store_release (cond->__data.__g_signals + g1, lowseq); ++ + /* These values are just observed by signalers, and thus protected by the + lock. */ + unsigned int orig_size = wseq - (old_g1_start + old_orig_size); +diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c +index 20c348a5..1cb3dbf7 100644 +--- a/nptl/pthread_cond_wait.c ++++ b/nptl/pthread_cond_wait.c +@@ -238,9 +238,7 @@ __condvar_cleanup_waiting (void *arg) + signaled), and a reference count. + + The group reference count is used to maintain the number of waiters that +- are using the group's futex. Before a group can change its role, the +- reference count must show that no waiters are using the futex anymore; this +- prevents ABA issues on the futex word. ++ are using the group's futex. + + To represent which intervals in the waiter sequence the groups cover (and + thus also which group slot contains G1 or G2), we use a 64b counter to +@@ -300,11 +298,12 @@ __condvar_cleanup_waiting (void *arg) + last reference. + * Reference count used by waiters concurrently with signalers that have + acquired the condvar-internal lock. +- __g_signals: The number of signals that can still be consumed. ++ __g_signals: The number of signals that can still be consumed, relative to ++ the current g1_start. (i.e. bits 31 to 1 of __g_signals are bits ++ 31 to 1 of g1_start with the signal count added) + * Used as a futex word by waiters. Used concurrently by waiters and + signalers. +- * LSB is true iff this group has been completely signaled (i.e., it is +- closed). ++ * LSB is currently reserved and 0. + __g_size: Waiters remaining in this group (i.e., which have not been + signaled yet. + * Accessed by signalers and waiters that cancel waiting (both do so only +@@ -328,18 +327,6 @@ __condvar_cleanup_waiting (void *arg) + sufficient because if a waiter can see a sufficiently large value, it could + have also consume a signal in the waiters group. + +- Waiters try to grab a signal from __g_signals without holding a reference +- count, which can lead to stealing a signal from a more recent group after +- their own group was already closed. They cannot always detect whether they +- in fact did because they do not know when they stole, but they can +- conservatively add a signal back to the group they stole from; if they +- did so unnecessarily, all that happens is a spurious wake-up. To make this +- even less likely, __g1_start contains the index of the current g2 too, +- which allows waiters to check if there aliasing on the group slots; if +- there wasn't, they didn't steal from the current G1, which means that the +- G1 they stole from must have been already closed and they do not need to +- fix anything. 
+- + It is essential that the last field in pthread_cond_t is __g_signals[1]: + The previous condvar used a pointer-sized field in pthread_cond_t, so a + PTHREAD_COND_INITIALIZER from that condvar implementation might only +@@ -435,6 +422,9 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + { + while (1) + { ++ uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); ++ unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; ++ + /* Spin-wait first. + Note that spinning first without checking whether a timeout + passed might lead to what looks like a spurious wake-up even +@@ -446,35 +436,45 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + having to compare against the current time seems to be the right + choice from a performance perspective for most use cases. */ + unsigned int spin = maxspin; +- while (signals == 0 && spin > 0) ++ while (spin > 0 && ((int)(signals - lowseq) < 2)) + { + /* Check that we are not spinning on a group that's already + closed. */ +- if (seq < (__condvar_load_g1_start_relaxed (cond) >> 1)) +- goto done; ++ if (seq < (g1_start >> 1)) ++ break; + + /* TODO Back off. */ + + /* Reload signals. See above for MO. */ + signals = atomic_load_acquire (cond->__data.__g_signals + g); ++ g1_start = __condvar_load_g1_start_relaxed (cond); ++ lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; + spin--; + } + +- /* If our group will be closed as indicated by the flag on signals, +- don't bother grabbing a signal. */ +- if (signals & 1) +- goto done; +- +- /* If there is an available signal, don't block. */ +- if (signals != 0) ++ if (seq < (g1_start >> 1)) ++ { ++ /* If the group is closed already, ++ then this waiter originally had enough extra signals to ++ consume, up until the time its group was closed. */ ++ goto done; ++ } ++ ++ /* If there is an available signal, don't block. ++ If __g1_start has advanced at all, then we must be in G1 ++ by now, perhaps in the process of switching back to an older ++ G2, but in either case we're allowed to consume the available ++ signal and should not block anymore. */ ++ if ((int)(signals - lowseq) >= 2) + break; + + /* No signals available after spinning, so prepare to block. + We first acquire a group reference and use acquire MO for that so + that we synchronize with the dummy read-modify-write in + __condvar_quiesce_and_switch_g1 if we read from that. In turn, +- in this case this will make us see the closed flag on __g_signals +- that designates a concurrent attempt to reuse the group's slot. ++ in this case this will make us see the advancement of __g_signals ++ to the upcoming new g1_start that occurs with a concurrent ++ attempt to reuse the group's slot. + We use acquire MO for the __g_signals check to make the + __g1_start check work (see spinning above). + Note that the group reference acquisition will not mask the +@@ -482,15 +482,24 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + an atomic read-modify-write operation and thus extend the release + sequence. */ + atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2); +- if (((atomic_load_acquire (cond->__data.__g_signals + g) & 1) != 0) +- || (seq < (__condvar_load_g1_start_relaxed (cond) >> 1))) ++ signals = atomic_load_acquire (cond->__data.__g_signals + g); ++ g1_start = __condvar_load_g1_start_relaxed (cond); ++ lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; ++ ++ if (seq < (g1_start >> 1)) + { +- /* Our group is closed. 
Wake up any signalers that might be +- waiting. */ ++ /* group is closed already, so don't block */ + __condvar_dec_grefs (cond, g, private); + goto done; + } + ++ if ((int)(signals - lowseq) >= 2) ++ { ++ /* a signal showed up or G1/G2 switched after we grabbed the refcount */ ++ __condvar_dec_grefs (cond, g, private); ++ break; ++ } ++ + // Now block. + struct _pthread_cleanup_buffer buffer; + struct _condvar_cleanup_buffer cbuffer; +@@ -501,7 +510,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + __pthread_cleanup_push (&buffer, __condvar_cleanup_waiting, &cbuffer); + + err = __futex_abstimed_wait_cancelable64 ( +- cond->__data.__g_signals + g, 0, clockid, abstime, private); ++ cond->__data.__g_signals + g, signals, clockid, abstime, private); + + __pthread_cleanup_pop (&buffer, 0); + +@@ -524,6 +533,8 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + signals = atomic_load_acquire (cond->__data.__g_signals + g); + } + ++ if (seq < (__condvar_load_g1_start_relaxed (cond) >> 1)) ++ goto done; + } + /* Try to grab a signal. Use acquire MO so that we see an up-to-date value + of __g1_start below (see spinning above for a similar case). In +@@ -532,69 +543,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + while (!atomic_compare_exchange_weak_acquire (cond->__data.__g_signals + g, + &signals, signals - 2)); + +- /* We consumed a signal but we could have consumed from a more recent group +- that aliased with ours due to being in the same group slot. If this +- might be the case our group must be closed as visible through +- __g1_start. */ +- uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); +- if (seq < (g1_start >> 1)) +- { +- /* We potentially stole a signal from a more recent group but we do not +- know which group we really consumed from. +- We do not care about groups older than current G1 because they are +- closed; we could have stolen from these, but then we just add a +- spurious wake-up for the current groups. +- We will never steal a signal from current G2 that was really intended +- for G2 because G2 never receives signals (until it becomes G1). We +- could have stolen a signal from G2 that was conservatively added by a +- previous waiter that also thought it stole a signal -- but given that +- that signal was added unnecessarily, it's not a problem if we steal +- it. +- Thus, the remaining case is that we could have stolen from the current +- G1, where "current" means the __g1_start value we observed. However, +- if the current G1 does not have the same slot index as we do, we did +- not steal from it and do not need to undo that. This is the reason +- for putting a bit with G2's index into__g1_start as well. */ +- if (((g1_start & 1) ^ 1) == g) +- { +- /* We have to conservatively undo our potential mistake of stealing +- a signal. We can stop trying to do that when the current G1 +- changes because other spinning waiters will notice this too and +- __condvar_quiesce_and_switch_g1 has checked that there are no +- futex waiters anymore before switching G1. +- Relaxed MO is fine for the __g1_start load because we need to +- merely be able to observe this fact and not have to observe +- something else as well. +- ??? Would it help to spin for a little while to see whether the +- current G1 gets closed? This might be worthwhile if the group is +- small or close to being closed. 
*/ +- unsigned int s = atomic_load_relaxed (cond->__data.__g_signals + g); +- while (__condvar_load_g1_start_relaxed (cond) == g1_start) +- { +- /* Try to add a signal. We don't need to acquire the lock +- because at worst we can cause a spurious wake-up. If the +- group is in the process of being closed (LSB is true), this +- has an effect similar to us adding a signal. */ +- if (((s & 1) != 0) +- || atomic_compare_exchange_weak_relaxed +- (cond->__data.__g_signals + g, &s, s + 2)) +- { +- /* If we added a signal, we also need to add a wake-up on +- the futex. We also need to do that if we skipped adding +- a signal because the group is being closed because +- while __condvar_quiesce_and_switch_g1 could have closed +- the group, it might stil be waiting for futex waiters to +- leave (and one of those waiters might be the one we stole +- the signal from, which cause it to block using the +- futex). */ +- futex_wake (cond->__data.__g_signals + g, 1, private); +- break; +- } +- /* TODO Back off. */ +- } +- } +- } +- + done: + + /* Confirm that we have been woken. We do that before acquiring the mutex +-- +2.49.0 + diff --git a/meta/recipes-core/glibc/glibc_2.35.bb b/meta/recipes-core/glibc/glibc_2.35.bb index ca7f630699..15179899d4 100644 --- a/meta/recipes-core/glibc/glibc_2.35.bb +++ b/meta/recipes-core/glibc/glibc_2.35.bb @@ -62,6 +62,7 @@ SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \ file://0022-sysdeps-gnu-configure.ac-Set-libc_cv_rootsbindir-onl.patch \ file://0023-timezone-Make-shell-interpreter-overridable-in-tzsel.patch \ file://0024-fix-create-thread-failed-in-unprivileged-process-BZ-.patch \ + file://0026-PR25847-1.patch \ \ file://0001-Revert-Linux-Implement-a-useful-version-of-_startup_.patch \ file://0002-get_nscd_addresses-Fix-subscript-typos-BZ-29605.patch \ From patchwork Tue Oct 14 22:44:43 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Sakoman X-Patchwork-Id: 72326 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9739FCCD190 for ; Tue, 14 Oct 2025 22:45:17 +0000 (UTC) Received: from mail-pg1-f179.google.com (mail-pg1-f179.google.com [209.85.215.179]) by mx.groups.io with SMTP id smtpd.web10.2516.1760481907956994138 for ; Tue, 14 Oct 2025 15:45:08 -0700 Authentication-Results: mx.groups.io; dkim=pass header.i=@sakoman-com.20230601.gappssmtp.com header.s=20230601 header.b=t+JD1gBP; spf=softfail (domain: sakoman.com, ip: 209.85.215.179, mailfrom: steve@sakoman.com) Received: by mail-pg1-f179.google.com with SMTP id 41be03b00d2f7-b679450ecb6so3181820a12.2 for ; Tue, 14 Oct 2025 15:45:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sakoman-com.20230601.gappssmtp.com; s=20230601; t=1760481907; x=1761086707; darn=lists.openembedded.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=BTn+1TAubpA/0VMcd+9uMUhVDl7wkhf58OS5EPzyWzk=; b=t+JD1gBPCQAfrale/9D9p171PvUyxuG/obGiy9OvKsVjVq6H6wGdkyUkKiC9n7DZVE T25HOvARlhFgQzPD6YkZmZsrbdArnT9px+LUJSYeO+A415O0ksSum8DV/sizWrTtqye5 wFnE1O2MVIRrpyizCoQ89nE80+KgBui0skGxqAZez9Dmq7HyxtksDy/2+gepI0RXA/nh iRqGYcwIppstDVMu78bmFgBywbpL+1pxkvUTf5n8YCd5g0vvOjdGG0kZUuENILpphX38 
TBXaYJcVzaXKexUkJqepAms/ZfM+dgOlZ3yHbnO8UsXcvr51RzOZrSD5bX1pBvuuWWOb wt1w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1760481907; x=1761086707; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=BTn+1TAubpA/0VMcd+9uMUhVDl7wkhf58OS5EPzyWzk=; b=hCpL9Psp/1M1WgBCH8axWJR0uHZqJqp2szNvHxUsZKIoWe4TKCduRIoRokgwv5rfpg UnCer3jO9d/wYoyYzGUyqAPqubWZRrtCjqsMKfIcFNT+y0uglqtcYIh1oIa/KJ5JyWZ1 rE1YHkWjD+n7mgkaSi1c6CbqCeKbqrZalNWCVjiTc3m9YxpHLk/1khAzcc1rL+GAffij LaMJSEFKOQeSWRvWypOzqbtZm0VZXQpGPxqjSoA7e8cnxIwShQhLARRF11g1VltHssUk Ekx8NDYfuTg+0lefl5f/3zwh5mCEUMzj3cSljYBOyRQekMuYHlDcB1FV46tUA3daQ4Nr VkNA== X-Gm-Message-State: AOJu0Yy8sQd8A7+r3lmkD04mOX2AkgGOtP995tE4qeYBoR3bhPWcorHc Aj06b4uQLgr5zQVxcMKDPf2dNIO0LUezEL7snt+zhFzHiLCF9mMjDr2AoeNob2yzsY2wIJuniie NuPMo X-Gm-Gg: ASbGncvM3Sp6iom354ZQjjSzKX6bPxWWlhPhcTc1A/6lURwsy5L8MWegrzmK6ZK3NL5 U0TwbwGIX9g6o+uXTknQfc57E9FHpiWYJMUQhSjePPVAAT+LeFC85yF6Dz5EOTHrLH2KHwhsnRI qwS3/2TDsgDUkywKlTt1AnaA3ML9OlvPhcXWhj6Uc++9MkE9mDVUhKepQCsJY3z/8HQsV3efhRF 9wi6U1BGs6ILOVVf4uHjkpJIYLREbOnkhetMpCmSLGr0s9H0cSFm3atbWc3vMDbYPzHdOCJ+XGq wCQz7WXw5BfT4PC0CHST8FLVCBRgZi/RTEMmdBnQf96kWdmPPGZAOvqriNhGni6XdtvfM+ObvUI OVAGvS94Xe+/mhJ7E2LzhnmNfTljTL/SzUGXVXEjTgnA= X-Google-Smtp-Source: AGHT+IH9yDHkrZyov68bEnBpez3tN9XBgpFCFdrM3byvUExikVJQ/egZZG6laWTp1nV2R1W7WWDMrg== X-Received: by 2002:a17:903:2c03:b0:24c:d6c6:c656 with SMTP id d9443c01a7336-290272135a1mr275888705ad.4.1760481907069; Tue, 14 Oct 2025 15:45:07 -0700 (PDT) Received: from hexa.. ([2602:feb4:3b:2100:ebea:520a:7699:bba7]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29034e20479sm174847365ad.47.2025.10.14.15.45.06 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Oct 2025 15:45:06 -0700 (PDT) From: Steve Sakoman To: openembedded-core@lists.openembedded.org Subject: [OE-core][kirkstone 06/14] glibc: nptl Update comments and indentation for new condvar implementation Date: Tue, 14 Oct 2025 15:44:43 -0700 Message-ID: X-Mailer: git-send-email 2.43.0 In-Reply-To: References: MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Tue, 14 Oct 2025 22:45:17 -0000 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-core/message/224860 From: Sunil Dora The following commits have been cherry-picked from Glibc master branch: Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 [1] https://sourceware.org/git/?p=glibc.git;a=commit;h=0cc973160c23bb67f895bc887dd6942d29f8fee3 [2] https://sourceware.org/pipermail/libc-stable/2025-July/002275.html Signed-off-by: Sunil Dora Signed-off-by: Steve Sakoman --- .../glibc/glibc/0026-PR25847-2.patch | 145 ++++++++++++++++++ meta/recipes-core/glibc/glibc_2.35.bb | 1 + 2 files changed, 146 insertions(+) create mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-2.patch diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-2.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-2.patch new file mode 100644 index 0000000000..a9ed702170 --- /dev/null +++ b/meta/recipes-core/glibc/glibc/0026-PR25847-2.patch @@ -0,0 +1,145 @@ +From 306ea7810f5f6709ef3942a7be75077203b5d201 Mon Sep 17 00:00:00 2001 +From: Malte Skarupke +Date: Tue, 14 Oct 2025 04:27:19 -0700 +Subject: [PATCH] nptl: Update comments and indentation for new condvar + implementation + +Some comments were wrong after the most recent commit. 
This fixes that. +Also fixing indentation where it was using spaces instead of tabs. + +The following commits have been cherry-picked from Glibc master branch: +Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 +commit: 0cc973160c23bb67f895bc887dd6942d29f8fee3 + +Upstream-Status: Submitted +[https://sourceware.org/pipermail/libc-stable/2025-July/002275.html] + +Signed-off-by: Sunil Dora +--- + nptl/pthread_cond_common.c | 5 +++-- + nptl/pthread_cond_wait.c | 39 +++++++++++++++++++------------------- + 2 files changed, 22 insertions(+), 22 deletions(-) + +diff --git a/nptl/pthread_cond_common.c b/nptl/pthread_cond_common.c +index a55eee3e..350a16fa 100644 +--- a/nptl/pthread_cond_common.c ++++ b/nptl/pthread_cond_common.c +@@ -221,8 +221,9 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, + * New waiters arriving concurrently with the group switching will all go + into G2 until we atomically make the switch. Waiters existing in G2 + are not affected. +- * Waiters in G1 will be closed out immediately by the advancing of +- __g_signals to the next "lowseq" (low 31 bits of the new g1_start), ++ * Waiters in G1 have already received a signal and been woken. If they ++ haven't woken yet, they will be closed out immediately by the advancing ++ of __g_signals to the next "lowseq" (low 31 bits of the new g1_start), + which will prevent waiters from blocking using a futex on + __g_signals since it provides enough signals for all possible + remaining waiters. As a result, they can each consume a signal +diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c +index 1cb3dbf7..cee19687 100644 +--- a/nptl/pthread_cond_wait.c ++++ b/nptl/pthread_cond_wait.c +@@ -249,7 +249,7 @@ __condvar_cleanup_waiting (void *arg) + figure out whether they are in a group that has already been completely + signaled (i.e., if the current G1 starts at a later position that the + waiter's position). Waiters cannot determine whether they are currently +- in G2 or G1 -- but they do not have too because all they are interested in ++ in G2 or G1 -- but they do not have to because all they are interested in + is whether there are available signals, and they always start in G2 (whose + group slot they know because of the bit in the waiter sequence. Signalers + will simply fill the right group until it is completely signaled and can +@@ -412,7 +412,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + } + + /* Now wait until a signal is available in our group or it is closed. +- Acquire MO so that if we observe a value of zero written after group ++ Acquire MO so that if we observe (signals == lowseq) after group + switching in __condvar_quiesce_and_switch_g1, we synchronize with that + store and will see the prior update of __g1_start done while switching + groups too. */ +@@ -422,8 +422,8 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + { + while (1) + { +- uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); +- unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; ++ uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); ++ unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; + + /* Spin-wait first. + Note that spinning first without checking whether a timeout +@@ -447,21 +447,21 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + + /* Reload signals. See above for MO. 
*/ + signals = atomic_load_acquire (cond->__data.__g_signals + g); +- g1_start = __condvar_load_g1_start_relaxed (cond); +- lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; ++ g1_start = __condvar_load_g1_start_relaxed (cond); ++ lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; + spin--; + } + +- if (seq < (g1_start >> 1)) ++ if (seq < (g1_start >> 1)) + { +- /* If the group is closed already, ++ /* If the group is closed already, + then this waiter originally had enough extra signals to + consume, up until the time its group was closed. */ + goto done; +- } ++ } + + /* If there is an available signal, don't block. +- If __g1_start has advanced at all, then we must be in G1 ++ If __g1_start has advanced at all, then we must be in G1 + by now, perhaps in the process of switching back to an older + G2, but in either case we're allowed to consume the available + signal and should not block anymore. */ +@@ -483,22 +483,23 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + sequence. */ + atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2); + signals = atomic_load_acquire (cond->__data.__g_signals + g); +- g1_start = __condvar_load_g1_start_relaxed (cond); +- lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; ++ g1_start = __condvar_load_g1_start_relaxed (cond); ++ lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; + +- if (seq < (g1_start >> 1)) ++ if (seq < (g1_start >> 1)) + { +- /* group is closed already, so don't block */ ++ /* group is closed already, so don't block */ + __condvar_dec_grefs (cond, g, private); + goto done; + } + + if ((int)(signals - lowseq) >= 2) + { +- /* a signal showed up or G1/G2 switched after we grabbed the refcount */ ++ /* a signal showed up or G1/G2 switched after we grabbed the ++ refcount */ + __condvar_dec_grefs (cond, g, private); + break; +- } ++ } + + // Now block. + struct _pthread_cleanup_buffer buffer; +@@ -536,10 +537,8 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + if (seq < (__condvar_load_g1_start_relaxed (cond) >> 1)) + goto done; + } +- /* Try to grab a signal. Use acquire MO so that we see an up-to-date value +- of __g1_start below (see spinning above for a similar case). In +- particular, if we steal from a more recent group, we will also see a +- more recent __g1_start below. */ ++ /* Try to grab a signal. See above for MO. 
(if we do another loop ++ iteration we need to see the correct value of g1_start) */ + while (!atomic_compare_exchange_weak_acquire (cond->__data.__g_signals + g, + &signals, signals - 2)); + +-- +2.49.0 + diff --git a/meta/recipes-core/glibc/glibc_2.35.bb b/meta/recipes-core/glibc/glibc_2.35.bb index 15179899d4..732cb96f94 100644 --- a/meta/recipes-core/glibc/glibc_2.35.bb +++ b/meta/recipes-core/glibc/glibc_2.35.bb @@ -63,6 +63,7 @@ SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \ file://0023-timezone-Make-shell-interpreter-overridable-in-tzsel.patch \ file://0024-fix-create-thread-failed-in-unprivileged-process-BZ-.patch \ file://0026-PR25847-1.patch \ + file://0026-PR25847-2.patch \ \ file://0001-Revert-Linux-Implement-a-useful-version-of-_startup_.patch \ file://0002-get_nscd_addresses-Fix-subscript-typos-BZ-29605.patch \ From patchwork Tue Oct 14 22:44:44 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Sakoman X-Patchwork-Id: 72331 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id AD77FCCD198 for ; Tue, 14 Oct 2025 22:45:17 +0000 (UTC) Received: from mail-pg1-f171.google.com (mail-pg1-f171.google.com [209.85.215.171]) by mx.groups.io with SMTP id smtpd.web11.2576.1760481909523206385 for ; Tue, 14 Oct 2025 15:45:09 -0700 Authentication-Results: mx.groups.io; dkim=pass header.i=@sakoman-com.20230601.gappssmtp.com header.s=20230601 header.b=HlOO1wVw; spf=softfail (domain: sakoman.com, ip: 209.85.215.171, mailfrom: steve@sakoman.com) Received: by mail-pg1-f171.google.com with SMTP id 41be03b00d2f7-b6271ea39f4so4030095a12.3 for ; Tue, 14 Oct 2025 15:45:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sakoman-com.20230601.gappssmtp.com; s=20230601; t=1760481909; x=1761086709; darn=lists.openembedded.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=P4W0NFmClxkHVtp0MzCXxfiXohoZUzlhJuhbz4h977U=; b=HlOO1wVwRsLMzuWkMmjZ/XXD8A0QORVWjSkI4R+emgwrpSZ1FTawX2dvwJPMLmraLp 1zkGigs5rhtxGtWBo48mKBkD+o0kQEqIk1Cfh0Wg3I+JX3VwqTABZOKyyUP0E7m88nwa VczrXGvF9mRb9+5Q550vI+PURPatAs0BPdJiKGovF0sZwKhMi5vjWPB0srAoqSvh0iZA gvA3k/If5nlCcnvabeVoO31SK0APzUOEKfXV5CdLCwYPShimOdoAmldp51dHq07nxNwE 1m6axF9xfF1qspwqKg8PKAdDVw+aOlcPvlbVp7p/FE6/KmnvqApvVmraj1G4/R6moDhZ 9yGA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1760481909; x=1761086709; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=P4W0NFmClxkHVtp0MzCXxfiXohoZUzlhJuhbz4h977U=; b=UifVKXpagB7MZAXQQ8aEdt6bzTPimCi1Ajt9laBlZvEUAEq7nvxJJYBcC87T28TUWd fPuPCvvL2WJimBygj6bXvYLmA364zsdxluPR300By8rZ6CLiYj6+dCMxfSI+ZGMDFR8O mWRYOJmu3gyNq2xIMjGD+Hy0hlBpicK+UnnWpfO4wX25N9JzlyxFb7SIQXEDEEbiXR6m 0cbMBEZmIVHputJB0mmwqHtgRNIkG3dlEEsLYjOxe3WkW9rnaScKwO7uDVM3PDZlw9sd ntNIpEiHkcoKLTcUhPptpKHDv2n1GV9P8bn8bhsRu0LdMLUGWMUKWgRZoLregImZvynG QA/g== X-Gm-Message-State: AOJu0YwB49LTVGBuz2dUcqdsDKCQbjWTOHNR74WnkjoE6Ds9m3BxV+p/ azWvZF40k13RwkeX7H5IouXBvdlr0+dHhxQcDdD1s8hzgE7XX8rvd11snVVSR9lJoEIz4gQ89ZN 4x+gj X-Gm-Gg: ASbGncsHsMrYSetohvNcF/2f+WUZdKLFp7GfPLh1ok7WmniXCnaNoGW+lBBhoSa+Vzx 
NjgX4xtJXI/Chg9gmFgLdWGc/Lp9mNZKY32L83vr9ReexAqEno7ftt8YicV/DXYRJI/Sv0a+MUD JATbg8j+KZHQTpsbYpXe7jN9JKo1FPgdgJ6uzZFl4OnbmvviMJRPHoR29RLBu5oYqdhxEqZ4KMp TZmfR2n/JU2xZXQVtYzXpywimYFw9OF7NzIJjnDCqdRgGZIf5LbCSsLWGxEpfxUlvqDH8LK0nOL kdncwWR7HnZxOeMZjbuk6PZJVt4UuOAcZR+i/NqAY0cLLWB54mlGCCzUfvqoE9RN7lSkEjGgqtG eEZmUB1oPSDr1RDXNSkRoSAgUlzPB0BKg5tz4bFffwco= X-Google-Smtp-Source: AGHT+IGvQDhWlwxb5NDJCZYMeo+lBRIwek+uX910S1FIz+BZn5OEkxfK1cx6XHaMAeeT2WiRvMvisA== X-Received: by 2002:a17:902:f70b:b0:250:1c22:e78 with SMTP id d9443c01a7336-29027356cdbmr347094855ad.1.1760481908626; Tue, 14 Oct 2025 15:45:08 -0700 (PDT) Received: from hexa.. ([2602:feb4:3b:2100:ebea:520a:7699:bba7]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29034e20479sm174847365ad.47.2025.10.14.15.45.08 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Oct 2025 15:45:08 -0700 (PDT) From: Steve Sakoman To: openembedded-core@lists.openembedded.org Subject: [OE-core][kirkstone 07/14] glibc: nptl Remove unnecessary catch-all-wake in condvar group switch Date: Tue, 14 Oct 2025 15:44:44 -0700 Message-ID: <18b4f22aaae19cd0efb21433f0c23c5580246a2e.1760481775.git.steve@sakoman.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: References: MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Tue, 14 Oct 2025 22:45:17 -0000 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-core/message/224861 From: Sunil Dora The following commits have been cherry-picked from Glibc master branch: Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 [1] https://sourceware.org/git/?p=glibc.git;a=commit;h=b42cc6af11062c260c7dfa91f1c89891366fed3e [2] https://sourceware.org/pipermail/libc-stable/2025-July/002274.html Signed-off-by: Sunil Dora Signed-off-by: Steve Sakoman --- .../glibc/glibc/0026-PR25847-3.patch | 79 +++++++++++++++++++ meta/recipes-core/glibc/glibc_2.35.bb | 1 + 2 files changed, 80 insertions(+) create mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-3.patch diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-3.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-3.patch new file mode 100644 index 0000000000..4228be95d1 --- /dev/null +++ b/meta/recipes-core/glibc/glibc/0026-PR25847-3.patch @@ -0,0 +1,79 @@ +From 5f22e8cf95cf6b3b2e16ddb03820ae3e77fd420d Mon Sep 17 00:00:00 2001 +From: Malte Skarupke +Date: Tue, 14 Oct 2025 04:47:48 -0700 +Subject: [PATCH] nptl: Remove unnecessary catch-all-wake in condvar group + switch + +This wake is unnecessary. We only switch groups after every sleeper in a group +has been woken. Sure, they may take a while to actually wake up and may still +hold a reference, but waking them a second time doesn't speed that up. Instead +this just makes the code more complicated and may hide problems. + +In particular this safety wake wouldn't even have helped with the bug that was +fixed by Barrus' patch: The bug there was that pthread_cond_signal would not +switch g1 when it should, so we wouldn't even have entered this code path. 
+ +The following commits have been cherry-picked from Glibc master branch: +Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 +commit: b42cc6af11062c260c7dfa91f1c89891366fed3e + +Upstream-Status: Submitted +[https://sourceware.org/pipermail/libc-stable/2025-July/002274.html] + +Signed-off-by: Sunil Dora +--- + nptl/pthread_cond_common.c | 31 +------------------------------ + 1 file changed, 1 insertion(+), 30 deletions(-) + +diff --git a/nptl/pthread_cond_common.c b/nptl/pthread_cond_common.c +index 350a16fa..f976a533 100644 +--- a/nptl/pthread_cond_common.c ++++ b/nptl/pthread_cond_common.c +@@ -221,13 +221,7 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, + * New waiters arriving concurrently with the group switching will all go + into G2 until we atomically make the switch. Waiters existing in G2 + are not affected. +- * Waiters in G1 have already received a signal and been woken. If they +- haven't woken yet, they will be closed out immediately by the advancing +- of __g_signals to the next "lowseq" (low 31 bits of the new g1_start), +- which will prevent waiters from blocking using a futex on +- __g_signals since it provides enough signals for all possible +- remaining waiters. As a result, they can each consume a signal +- and they will eventually remove their group reference. */ ++ * Waiters in G1 have already received a signal and been woken. */ + + /* Update __g1_start, which finishes closing this group. The value we add + will never be negative because old_orig_size can only be zero when we +@@ -240,29 +234,6 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, + + unsigned int lowseq = ((old_g1_start + old_orig_size) << 1) & ~1U; + +- /* If any waiters still hold group references (and thus could be blocked), +- then wake them all up now and prevent any running ones from blocking. +- This is effectively a catch-all for any possible current or future +- bugs that can allow the group size to reach 0 before all G1 waiters +- have been awakened or at least given signals to consume, or any +- other case that can leave blocked (or about to block) older waiters.. */ +- if ((atomic_fetch_or_release (cond->__data.__g_refs + g1, 0) >> 1) > 0) +- { +- /* First advance signals to the end of the group (i.e. enough signals +- for the entire G1 group) to ensure that waiters which have not +- yet blocked in the futex will not block. +- Note that in the vast majority of cases, this should never +- actually be necessary, since __g_signals will have enough +- signals for the remaining g_refs waiters. As an optimization, +- we could check this first before proceeding, although that +- could still leave the potential for futex lost wakeup bugs +- if the signal count was non-zero but the futex wakeup +- was somehow lost. */ +- atomic_store_release (cond->__data.__g_signals + g1, lowseq); +- +- futex_wake (cond->__data.__g_signals + g1, INT_MAX, private); +- } +- + /* At this point, the old G1 is now a valid new G2 (but not in use yet). + No old waiter can neither grab a signal nor acquire a reference without + noticing that __g1_start is larger. 
+-- +2.49.0 + diff --git a/meta/recipes-core/glibc/glibc_2.35.bb b/meta/recipes-core/glibc/glibc_2.35.bb index 732cb96f94..0787dfe236 100644 --- a/meta/recipes-core/glibc/glibc_2.35.bb +++ b/meta/recipes-core/glibc/glibc_2.35.bb @@ -64,6 +64,7 @@ SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \ file://0024-fix-create-thread-failed-in-unprivileged-process-BZ-.patch \ file://0026-PR25847-1.patch \ file://0026-PR25847-2.patch \ + file://0026-PR25847-3.patch \ \ file://0001-Revert-Linux-Implement-a-useful-version-of-_startup_.patch \ file://0002-get_nscd_addresses-Fix-subscript-typos-BZ-29605.patch \ From patchwork Tue Oct 14 22:44:45 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Sakoman X-Patchwork-Id: 72328 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id AD727CCD196 for ; Tue, 14 Oct 2025 22:45:17 +0000 (UTC) Received: from mail-pl1-f176.google.com (mail-pl1-f176.google.com [209.85.214.176]) by mx.groups.io with SMTP id smtpd.web11.2578.1760481910946458858 for ; Tue, 14 Oct 2025 15:45:11 -0700 Authentication-Results: mx.groups.io; dkim=pass header.i=@sakoman-com.20230601.gappssmtp.com header.s=20230601 header.b=IKtb4528; spf=softfail (domain: sakoman.com, ip: 209.85.214.176, mailfrom: steve@sakoman.com) Received: by mail-pl1-f176.google.com with SMTP id d9443c01a7336-27ee41e0798so93651795ad.1 for ; Tue, 14 Oct 2025 15:45:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sakoman-com.20230601.gappssmtp.com; s=20230601; t=1760481910; x=1761086710; darn=lists.openembedded.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=FzsBlV/pTmorZE2GFMsUHkpAuQGkT9MnDZiIUz7ZphY=; b=IKtb4528RbdJHmDOzKfH+ZfGUqTFAqiJ7oXx0XqAOY4tuN0HBihiDzlB05xBHwM1yi gkTI9GCxBquTyw163z+x18enssLSZBLENoseuyiOy13QntvIEpTLSpXPvKcQHVRCNYCO Bi0xW4VFyltcKOzJfAQKu8orqQX7nlpVDXfxF08DOIYTcQvJrzaDOBMQPKi69ciW6atf mN16sTmramRTJGYurhWWuxUBtxK9WAzIoV6Engs5x+mBcl7jyR/LYxkY9oqMuZ4BQONw aoMNVkJX203KIkis3/OcVIsP2j1KowyaTkKOMKNs/wBHB4XwE7hD/ADlieBzhEnGLpbF e7qQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1760481910; x=1761086710; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=FzsBlV/pTmorZE2GFMsUHkpAuQGkT9MnDZiIUz7ZphY=; b=svUM3jo9WkQrCynoK4hhy8WC6w3Y/zWMSEnBuxUakesUsZzNxHUktkbAm9skuyLWtg 5ePgLP32K7SeDwxVpmT7ILTI9krbqapO9hvW23BbxbmkdUcR07dSik4uVuz+F6Oo3eWh Ltn0++MEOHFi/djNJTiBU3bfqsJoX98YnEuW7iJb+DXUpQrsJGBevXHUEQLOmWqxvW9K oJlIq0Nlc/vXSaNRtZi2IzuRP/4y8vRznxoRMO8Y0nLyTBdTbVaYDcL13MU4zwZ6LJxk F7axuq6QrNudmeN/j0a8kn63VHyux81nXWWRHO7l/aRQgOlrPoV/UZKz/TRILc9Bk6cz yCSg== X-Gm-Message-State: AOJu0YyGlOFji5ffwN7nnbb9XdGaHn/M52W1GabL/LfeUwntoMGEu32K 47+5DOf9Pc4GvIP98R7DJeTYS9eOBvfJKX7QvB6C1rIcqpxd2etRWFjkPvn3gWNiPcQV3gW+bUV ZbXt2 X-Gm-Gg: ASbGncstPhj46o3B/voRyczafFMOnH2lwEepX8C1YrVlmO3eSgnoNbVkR7prQCiBXk7 +yvrQnTq9G4mE0muBBv5af9DeovEPMnmaVnTB1xHAND2QSNloENN1Fpfi1RSupF6H1tDK/vVam0 Mz9p27N2PjFWL385jQvG0eVkTuluDVx8IFLwepdGiYfvHSvj3GJrXuMNmG3oBWCeRtv14kdnyux aTaVO8M0rvZ6mbKmip8vqrPI7R6cssnlP1mUTdcTYqT55IHhz+z7Bu0AItjafZ7en/qp42qs2af 
DU6mKtKjlROEClETbNl1LKq2IrsrJy2RNENVh6G2uf4MgNtjFjsoFsMnrl7PHMJVMqfUAXA9IZ/ cHeOA/3GKZTZXr5TeIqs8B2Gi0Yp7MhUY X-Google-Smtp-Source: AGHT+IGE2b1PyPj8WCGhtnIF97w5LhSf9xxkhkl8axPobBU3IEUJ2nUPYKRHqbX20XXtfFQQTOjBHQ== X-Received: by 2002:a17:903:298c:b0:265:982a:d450 with SMTP id d9443c01a7336-290273ffbf1mr352326845ad.40.1760481910080; Tue, 14 Oct 2025 15:45:10 -0700 (PDT) Received: from hexa.. ([2602:feb4:3b:2100:ebea:520a:7699:bba7]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29034e20479sm174847365ad.47.2025.10.14.15.45.09 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Oct 2025 15:45:09 -0700 (PDT) From: Steve Sakoman To: openembedded-core@lists.openembedded.org Subject: [OE-core][kirkstone 08/14] glibc: nptl Remove unnecessary quadruple check in pthread_cond_wait Date: Tue, 14 Oct 2025 15:44:45 -0700 Message-ID: X-Mailer: git-send-email 2.43.0 In-Reply-To: References: MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Tue, 14 Oct 2025 22:45:17 -0000 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-core/message/224862 From: Sunil Dora The following commits have been cherry-picked from Glibc master branch: Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 [1] https://sourceware.org/git/?p=glibc.git;a=commit;h=4f7b051f8ee3feff1b53b27a906f245afaa9cee1 [2] https://sourceware.org/pipermail/libc-stable/2025-July/002276.html Signed-off-by: Sunil Dora Signed-off-by: Steve Sakoman --- .../glibc/glibc/0026-PR25847-4.patch | 118 ++++++++++++++++++ meta/recipes-core/glibc/glibc_2.35.bb | 1 + 2 files changed, 119 insertions(+) create mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-4.patch diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-4.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-4.patch new file mode 100644 index 0000000000..475462c880 --- /dev/null +++ b/meta/recipes-core/glibc/glibc/0026-PR25847-4.patch @@ -0,0 +1,118 @@ +From d714165c8bb3cac420077cfa61e3df87ea7f8b2c Mon Sep 17 00:00:00 2001 +From: Malte Skarupke +Date: Tue, 14 Oct 2025 05:34:06 -0700 +Subject: [PATCH] nptl: Remove unnecessary quadruple check in pthread_cond_wait + +pthread_cond_wait was checking whether it was in a closed group no less than +four times. Checking once is enough. Here are the four checks: + +1. While spin-waiting. This was dead code: maxspin is set to 0 and has been + for years. +2. Before deciding to go to sleep, and before incrementing grefs: I kept this +3. After incrementing grefs. There is no reason to think that the group would + close while we do an atomic increment. Obviously it could close at any + point, but that doesn't mean we have to recheck after every step. This + check was equally good as check 2, except it has to do more work. +4. When we find ourselves in a group that has a signal. We only get here after + we check that we're not in a closed group. There is no need to check again. + The check would only have helped in cases where the compare_exchange in the + next line would also have failed. Relying on the compare_exchange is fine. + +Removing the duplicate checks clarifies the code. 
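(Aside, not part of the quoted patch: a minimal standalone C11 sketch of the point in item 4 above. Once a signal looks available, a weak compare-exchange loop is enough on its own, because a failed CAS refreshes the observed value; the token counter and every name below are invented purely for illustration.)

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static _Atomic unsigned int tokens = 4;

/* Take one token if any are left.  There is deliberately no separate
   "did the count change?" re-check before the CAS: if another thread
   raced us, the weak compare-exchange fails and reloads 'observed',
   so the loop condition is re-evaluated against the current value.  */
static bool
take_token (void)
{
  unsigned int observed = atomic_load_explicit (&tokens, memory_order_acquire);
  while (observed > 0)
    {
      if (atomic_compare_exchange_weak_explicit (&tokens, &observed,
                                                 observed - 1,
                                                 memory_order_acquire,
                                                 memory_order_relaxed))
        return true;   /* got one */
      /* CAS failed; 'observed' now holds the fresh value, just loop.  */
    }
  return false;        /* none left */
}

int
main (void)
{
  while (take_token ())
    puts ("took a token");
  puts ("no tokens left");
  return 0;
}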
+ +The following commits have been cherry-picked from Glibc master branch: +Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 +commit: 4f7b051f8ee3feff1b53b27a906f245afaa9cee1 + +Upstream-Status: Submitted +[https://sourceware.org/pipermail/libc-stable/2025-July/002276.html] + +Signed-off-by: Sunil Dora +--- + nptl/pthread_cond_wait.c | 49 ---------------------------------------- + 1 file changed, 49 deletions(-) + +diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c +index cee19687..47e834ca 100644 +--- a/nptl/pthread_cond_wait.c ++++ b/nptl/pthread_cond_wait.c +@@ -366,7 +366,6 @@ static __always_inline int + __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + clockid_t clockid, const struct __timespec64 *abstime) + { +- const int maxspin = 0; + int err; + int result = 0; + +@@ -425,33 +424,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); + unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; + +- /* Spin-wait first. +- Note that spinning first without checking whether a timeout +- passed might lead to what looks like a spurious wake-up even +- though we should return ETIMEDOUT (e.g., if the caller provides +- an absolute timeout that is clearly in the past). However, +- (1) spurious wake-ups are allowed, (2) it seems unlikely that a +- user will (ab)use pthread_cond_wait as a check for whether a +- point in time is in the past, and (3) spinning first without +- having to compare against the current time seems to be the right +- choice from a performance perspective for most use cases. */ +- unsigned int spin = maxspin; +- while (spin > 0 && ((int)(signals - lowseq) < 2)) +- { +- /* Check that we are not spinning on a group that's already +- closed. */ +- if (seq < (g1_start >> 1)) +- break; +- +- /* TODO Back off. */ +- +- /* Reload signals. See above for MO. */ +- signals = atomic_load_acquire (cond->__data.__g_signals + g); +- g1_start = __condvar_load_g1_start_relaxed (cond); +- lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; +- spin--; +- } +- + if (seq < (g1_start >> 1)) + { + /* If the group is closed already, +@@ -482,24 +454,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + an atomic read-modify-write operation and thus extend the release + sequence. */ + atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2); +- signals = atomic_load_acquire (cond->__data.__g_signals + g); +- g1_start = __condvar_load_g1_start_relaxed (cond); +- lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; +- +- if (seq < (g1_start >> 1)) +- { +- /* group is closed already, so don't block */ +- __condvar_dec_grefs (cond, g, private); +- goto done; +- } +- +- if ((int)(signals - lowseq) >= 2) +- { +- /* a signal showed up or G1/G2 switched after we grabbed the +- refcount */ +- __condvar_dec_grefs (cond, g, private); +- break; +- } + + // Now block. + struct _pthread_cleanup_buffer buffer; +@@ -533,9 +487,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + /* Reload signals. See above for MO. */ + signals = atomic_load_acquire (cond->__data.__g_signals + g); + } +- +- if (seq < (__condvar_load_g1_start_relaxed (cond) >> 1)) +- goto done; + } + /* Try to grab a signal. See above for MO. 
(if we do another loop + iteration we need to see the correct value of g1_start) */ +-- +2.49.0 + diff --git a/meta/recipes-core/glibc/glibc_2.35.bb b/meta/recipes-core/glibc/glibc_2.35.bb index 0787dfe236..f9086c0855 100644 --- a/meta/recipes-core/glibc/glibc_2.35.bb +++ b/meta/recipes-core/glibc/glibc_2.35.bb @@ -65,6 +65,7 @@ SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \ file://0026-PR25847-1.patch \ file://0026-PR25847-2.patch \ file://0026-PR25847-3.patch \ + file://0026-PR25847-4.patch \ \ file://0001-Revert-Linux-Implement-a-useful-version-of-_startup_.patch \ file://0002-get_nscd_addresses-Fix-subscript-typos-BZ-29605.patch \ From patchwork Tue Oct 14 22:44:46 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Sakoman X-Patchwork-Id: 72329 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id B5D37CCD197 for ; Tue, 14 Oct 2025 22:45:17 +0000 (UTC) Received: from mail-pg1-f170.google.com (mail-pg1-f170.google.com [209.85.215.170]) by mx.groups.io with SMTP id smtpd.web11.2580.1760481912581889073 for ; Tue, 14 Oct 2025 15:45:12 -0700 Authentication-Results: mx.groups.io; dkim=pass header.i=@sakoman-com.20230601.gappssmtp.com header.s=20230601 header.b=Tytpzupz; spf=softfail (domain: sakoman.com, ip: 209.85.215.170, mailfrom: steve@sakoman.com) Received: by mail-pg1-f170.google.com with SMTP id 41be03b00d2f7-b550a522a49so4856287a12.2 for ; Tue, 14 Oct 2025 15:45:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sakoman-com.20230601.gappssmtp.com; s=20230601; t=1760481912; x=1761086712; darn=lists.openembedded.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=/bSv9Jjr29pQHQmmMlpqj3cUyO7lAlMf96YU4p4vA8A=; b=Tytpzupz/5hRedG3Cladiit21Hl6Vftv2u3p1O2MRYwMntxjjhQwdtzwom8ITJ/7Yc qLrrBvvhYeBlcAn/5db7CFCkWPJ21lakcNaqmy4chKay99X4hfHx5UAwzWb4FI+QVRgu T2Ru/UqguwY/yRN7Ekb7SSFL7phoYzOQpPmUiDJtMSvoCJpeq8562OnzJUQBuARqAo7W jxw71M8BJGCSNwVv8DPDWHBjjZOmawsUiLg+oyqqrL94esaflE5W4xEClVMWQWTqiGcb tkxYADi2miwqWildeietOvDoWub9mNoInpA5bkBqSEHKSZ4sdsIfMG017X2/gjifayu4 P0WA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1760481912; x=1761086712; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=/bSv9Jjr29pQHQmmMlpqj3cUyO7lAlMf96YU4p4vA8A=; b=W5l5ZWR/J1V0mQxbWFFrgJqK1d0vnLubrbvRHw4YL8GvPsDiOTDpto2nlw0g9DTlBk NSLuzJYq6P3ycJQP0jeiDxRDle+WQ3VB1kx9LJs+sGYNSI5nV9dqeSD17flR9v1rwpX4 9v9a7fNlPsltHjqMGpNFQfEG17ZXf9zkL8Rjw6v1vHJi02Xql52MMTP0B0WMrRpCYhFo nk8WljXvNaWGQRgwwn2BJzxebYy7BqGSjb6BOoCW56sCIo1v7dXekpr1l85H1pB2LlgP NjgpQ0OnUx4XMGWVGBCS510a2KPtHI9dVgRUBftd7k4X9/wYV50xixMy5565YRkuT1kb fRuw== X-Gm-Message-State: AOJu0Yw+TQOk3HcS/GvtviJ9e6W2dIK41jOOnrJlosueO5a42rAi1QM9 0X4c0neVto1KNx3/Xw5/25qB0+QCPRRSo8I6jkCcgVAcLAS4w4plBx53bfoFFuKp8xpvvYSQG9Z TK9lc X-Gm-Gg: ASbGncueNqUnnJ3/2XAfY4Ww1DE4qq05lOFpBgT/LT1S2twlRitt94u1uKsgwP5PEtu 6eCklFF8LpT9kabu3Diy7illoBYCKPF/M9PTEJcoZTZdontvkn8yZf9VK1mp4ibayNiMgx/T6a0 saTM9dDWawMIozXRa0fEShZebhhtD5lKielh7RZnJAdkLIrkp3UHPIgYa54qcpI49TL+YED+UvP Cs01sym6ypGtldxGgkOfe3PlxO34Rief7rKpLM/HfEpQpvPqupp4thJf0o965PYxEFde2X03a0U 
wuTkUGNPr44dV+GEjX5QPrrX9pqAte0PD8Nh4/caSjB2Cqq32KWMFOC4SaGECWveE7NFIXcDbtA 7/UT5WYjFa4MY+zV7tj7uGhpZ34KttMuHNcKNJYtxJKw= X-Google-Smtp-Source: AGHT+IGdyrZW9fwfwfIXrxBIyUUZUIL2PMJJIzXUUoRKHdimM3PYaRgBwXbWqVloDrggUsfTXNe6Sw== X-Received: by 2002:a17:902:ce81:b0:266:f01a:98c4 with SMTP id d9443c01a7336-2902723e35cmr376298225ad.13.1760481911676; Tue, 14 Oct 2025 15:45:11 -0700 (PDT) Received: from hexa.. ([2602:feb4:3b:2100:ebea:520a:7699:bba7]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29034e20479sm174847365ad.47.2025.10.14.15.45.10 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Oct 2025 15:45:11 -0700 (PDT) From: Steve Sakoman To: openembedded-core@lists.openembedded.org Subject: [OE-core][kirkstone 09/14] glibc: Remove g_refs from condition variables Date: Tue, 14 Oct 2025 15:44:46 -0700 Message-ID: <1972b6776fa8a23b9d373d516ace32e136e9058f.1760481775.git.steve@sakoman.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: References: MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Tue, 14 Oct 2025 22:45:17 -0000 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-core/message/224863 From: Sunil Dora The following commits have been cherry-picked from Glibc master branch: Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 [1] https://sourceware.org/git/?p=glibc.git;a=commit;h=c36fc50781995e6758cae2b6927839d0157f213c [2] https://sourceware.org/pipermail/libc-stable/2025-July/002278.html Signed-off-by: Sunil Dora Signed-off-by: Steve Sakoman --- .../glibc/glibc/0026-PR25847-5.patch | 188 ++++++++++++++++++ meta/recipes-core/glibc/glibc_2.35.bb | 1 + 2 files changed, 189 insertions(+) create mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-5.patch diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-5.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-5.patch new file mode 100644 index 0000000000..e50e942471 --- /dev/null +++ b/meta/recipes-core/glibc/glibc/0026-PR25847-5.patch @@ -0,0 +1,188 @@ +From f904a81ff8d0469ceaf3220329e716c03fcbd2d3 Mon Sep 17 00:00:00 2001 +From: Malte Skarupke +Date: Tue, 14 Oct 2025 05:59:02 -0700 +Subject: [PATCH] nptl: Remove g_refs from condition variables + +This variable used to be needed to wait in group switching until all sleepers +have confirmed that they have woken. This is no longer needed. Nothing waits +on this variable so there is no need to track how many threads are currently +asleep in each group. + +The following commits have been cherry-picked from Glibc master branch: +Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 +cmmit: c36fc50781995e6758cae2b6927839d0157f213c + +Upstream-Status: Submitted +[https://sourceware.org/pipermail/libc-stable/2025-July/002278.html] + +Signed-off-by: Sunil Dora +--- + nptl/pthread_cond_wait.c | 52 +------------------------ + nptl/tst-cond22.c | 12 +++--- + sysdeps/nptl/bits/thread-shared-types.h | 3 +- + sysdeps/nptl/pthread.h | 2 +- + 4 files changed, 9 insertions(+), 60 deletions(-) + +diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c +index 47e834ca..8a9219e0 100644 +--- a/nptl/pthread_cond_wait.c ++++ b/nptl/pthread_cond_wait.c +@@ -143,23 +143,6 @@ __condvar_cancel_waiting (pthread_cond_t *cond, uint64_t seq, unsigned int g, + } + } + +-/* Wake up any signalers that might be waiting. 
*/ +-static void +-__condvar_dec_grefs (pthread_cond_t *cond, unsigned int g, int private) +-{ +- /* Release MO to synchronize-with the acquire load in +- __condvar_quiesce_and_switch_g1. */ +- if (atomic_fetch_add_release (cond->__data.__g_refs + g, -2) == 3) +- { +- /* Clear the wake-up request flag before waking up. We do not need more +- than relaxed MO and it doesn't matter if we apply this for an aliased +- group because we wake all futex waiters right after clearing the +- flag. */ +- atomic_fetch_and_relaxed (cond->__data.__g_refs + g, ~(unsigned int) 1); +- futex_wake (cond->__data.__g_refs + g, INT_MAX, private); +- } +-} +- + /* Clean-up for cancellation of waiters waiting for normal signals. We cancel + our registration as a waiter, confirm we have woken up, and re-acquire the + mutex. */ +@@ -171,8 +154,6 @@ __condvar_cleanup_waiting (void *arg) + pthread_cond_t *cond = cbuffer->cond; + unsigned g = cbuffer->wseq & 1; + +- __condvar_dec_grefs (cond, g, cbuffer->private); +- + __condvar_cancel_waiting (cond, cbuffer->wseq >> 1, g, cbuffer->private); + /* FIXME With the current cancellation implementation, it is possible that + a thread is cancelled after it has returned from a syscall. This could +@@ -327,15 +308,6 @@ __condvar_cleanup_waiting (void *arg) + sufficient because if a waiter can see a sufficiently large value, it could + have also consume a signal in the waiters group. + +- It is essential that the last field in pthread_cond_t is __g_signals[1]: +- The previous condvar used a pointer-sized field in pthread_cond_t, so a +- PTHREAD_COND_INITIALIZER from that condvar implementation might only +- initialize 4 bytes to zero instead of the 8 bytes we need (i.e., 44 bytes +- in total instead of the 48 we need). __g_signals[1] is not accessed before +- the first group switch (G2 starts at index 0), which will set its value to +- zero after a harmless fetch-or whose return value is ignored. This +- effectively completes initialization. +- + + Limitations: + * This condvar isn't designed to allow for more than +@@ -440,21 +412,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + if ((int)(signals - lowseq) >= 2) + break; + +- /* No signals available after spinning, so prepare to block. +- We first acquire a group reference and use acquire MO for that so +- that we synchronize with the dummy read-modify-write in +- __condvar_quiesce_and_switch_g1 if we read from that. In turn, +- in this case this will make us see the advancement of __g_signals +- to the upcoming new g1_start that occurs with a concurrent +- attempt to reuse the group's slot. +- We use acquire MO for the __g_signals check to make the +- __g1_start check work (see spinning above). +- Note that the group reference acquisition will not mask the +- release MO when decrementing the reference count because we use +- an atomic read-modify-write operation and thus extend the release +- sequence. */ +- atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2); +- + // Now block. + struct _pthread_cleanup_buffer buffer; + struct _condvar_cleanup_buffer cbuffer; +@@ -471,18 +428,11 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + + if (__glibc_unlikely (err == ETIMEDOUT || err == EOVERFLOW)) + { +- __condvar_dec_grefs (cond, g, private); +- /* If we timed out, we effectively cancel waiting. 
Note that +- we have decremented __g_refs before cancellation, so that a +- deadlock between waiting for quiescence of our group in +- __condvar_quiesce_and_switch_g1 and us trying to acquire +- the lock during cancellation is not possible. */ ++ /* If we timed out, we effectively cancel waiting. */ + __condvar_cancel_waiting (cond, seq, g, private); + result = err; + goto done; + } +- else +- __condvar_dec_grefs (cond, g, private); + + /* Reload signals. See above for MO. */ + signals = atomic_load_acquire (cond->__data.__g_signals + g); +diff --git a/nptl/tst-cond22.c b/nptl/tst-cond22.c +index 1336e9c7..bdcb45c5 100644 +--- a/nptl/tst-cond22.c ++++ b/nptl/tst-cond22.c +@@ -106,13 +106,13 @@ do_test (void) + status = 1; + } + +- printf ("cond = { 0x%x:%x, 0x%x:%x, %u/%u/%u, %u/%u/%u, %u, %u }\n", ++ printf ("cond = { 0x%x:%x, 0x%x:%x, %u/%u, %u/%u, %u, %u }\n", + c.__data.__wseq.__value32.__high, + c.__data.__wseq.__value32.__low, + c.__data.__g1_start.__value32.__high, + c.__data.__g1_start.__value32.__low, +- c.__data.__g_signals[0], c.__data.__g_refs[0], c.__data.__g_size[0], +- c.__data.__g_signals[1], c.__data.__g_refs[1], c.__data.__g_size[1], ++ c.__data.__g_signals[0], c.__data.__g_size[0], ++ c.__data.__g_signals[1], c.__data.__g_size[1], + c.__data.__g1_orig_size, c.__data.__wrefs); + + if (pthread_create (&th, NULL, tf, (void *) 1l) != 0) +@@ -152,13 +152,13 @@ do_test (void) + status = 1; + } + +- printf ("cond = { 0x%x:%x, 0x%x:%x, %u/%u/%u, %u/%u/%u, %u, %u }\n", ++ printf ("cond = { 0x%x:%x, 0x%x:%x, %u/%u, %u/%u, %u, %u }\n", + c.__data.__wseq.__value32.__high, + c.__data.__wseq.__value32.__low, + c.__data.__g1_start.__value32.__high, + c.__data.__g1_start.__value32.__low, +- c.__data.__g_signals[0], c.__data.__g_refs[0], c.__data.__g_size[0], +- c.__data.__g_signals[1], c.__data.__g_refs[1], c.__data.__g_size[1], ++ c.__data.__g_signals[0], c.__data.__g_size[0], ++ c.__data.__g_signals[1], c.__data.__g_size[1], + c.__data.__g1_orig_size, c.__data.__wrefs); + + return status; +diff --git a/sysdeps/nptl/bits/thread-shared-types.h b/sysdeps/nptl/bits/thread-shared-types.h +index 5653507e..6f17afa4 100644 +--- a/sysdeps/nptl/bits/thread-shared-types.h ++++ b/sysdeps/nptl/bits/thread-shared-types.h +@@ -95,8 +95,7 @@ struct __pthread_cond_s + { + __atomic_wide_counter __wseq; + __atomic_wide_counter __g1_start; +- unsigned int __g_refs[2] __LOCK_ALIGNMENT; +- unsigned int __g_size[2]; ++ unsigned int __g_size[2] __LOCK_ALIGNMENT; + unsigned int __g1_orig_size; + unsigned int __wrefs; + unsigned int __g_signals[2]; +diff --git a/sysdeps/nptl/pthread.h b/sysdeps/nptl/pthread.h +index dedad4ec..bbb36540 100644 +--- a/sysdeps/nptl/pthread.h ++++ b/sysdeps/nptl/pthread.h +@@ -152,7 +152,7 @@ enum + + + /* Conditional variable handling. 
*/ +-#define PTHREAD_COND_INITIALIZER { { {0}, {0}, {0, 0}, {0, 0}, 0, 0, {0, 0} } } ++#define PTHREAD_COND_INITIALIZER { { {0}, {0}, {0, 0}, 0, 0, {0, 0} } } + + + /* Cleanup buffers */ +-- +2.49.0 + diff --git a/meta/recipes-core/glibc/glibc_2.35.bb b/meta/recipes-core/glibc/glibc_2.35.bb index f9086c0855..e744260e87 100644 --- a/meta/recipes-core/glibc/glibc_2.35.bb +++ b/meta/recipes-core/glibc/glibc_2.35.bb @@ -66,6 +66,7 @@ SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \ file://0026-PR25847-2.patch \ file://0026-PR25847-3.patch \ file://0026-PR25847-4.patch \ + file://0026-PR25847-5.patch \ \ file://0001-Revert-Linux-Implement-a-useful-version-of-_startup_.patch \ file://0002-get_nscd_addresses-Fix-subscript-typos-BZ-29605.patch \ From patchwork Tue Oct 14 22:44:47 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Sakoman X-Patchwork-Id: 72330 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id 97356CCD184 for ; Tue, 14 Oct 2025 22:45:17 +0000 (UTC) Received: from mail-pl1-f182.google.com (mail-pl1-f182.google.com [209.85.214.182]) by mx.groups.io with SMTP id smtpd.web10.2521.1760481914014727207 for ; Tue, 14 Oct 2025 15:45:14 -0700 Authentication-Results: mx.groups.io; dkim=pass header.i=@sakoman-com.20230601.gappssmtp.com header.s=20230601 header.b=OLH/kkJ0; spf=softfail (domain: sakoman.com, ip: 209.85.214.182, mailfrom: steve@sakoman.com) Received: by mail-pl1-f182.google.com with SMTP id d9443c01a7336-27c369f898fso84710215ad.3 for ; Tue, 14 Oct 2025 15:45:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sakoman-com.20230601.gappssmtp.com; s=20230601; t=1760481913; x=1761086713; darn=lists.openembedded.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=o7RDkLUWowFPw+2PFZjFHI4glTHju6UNbdMufEcWvLY=; b=OLH/kkJ06GSi5Dnqax923ufEA7o+/amdFmtnTOxw+JjEy8HXUOwxsjseGf0OMtCRyE YKc1u1jgqhJJsrlKbmIEFLbK4nJk1537PfrDywWiTaqSwL6HrP2WlzbFTb0cU1izRUds fzy+s26LFrHj21uCPzsNQDOszlSPePmMUxhtPaK1RAt85mfSvCfst8LO6g5gcftiivE0 KMdG2mU6O0DWdSNKX+ohlmleZ/Q8hXPBaxLKZNI14BUQ8OqiRmGBqnnHidZNvlxlX6M7 uPMVECXokJNvHh+eKivQpGlHurh/TANp4zY/jKWXG7fvi1flWWmtfsbBJxJ+N4hRFiON 9hVg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1760481913; x=1761086713; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=o7RDkLUWowFPw+2PFZjFHI4glTHju6UNbdMufEcWvLY=; b=oVj5ex59nWKf7M4gOrBtBaALOoXjDUGoNSqU+T6apKmqmBEkwWNaqZtx/4gpiNdjdt wQd4jUWTvHO6Eruwf/sGcK6eWy/pNqzn9kz92gwf19YUctHdZTha9ToN5Fqc20VcMYaz aElrdLxULOrhuPQdk8cHjHJF+Bi//rkBv0o8xjZbhhCYkFKdqhVnGJSaI5oyVZ/kkith L3nRvzcU4JL2p2IYdcB8srHdT2Wz6Hi7mmvASxMd6Val0Ty1NJEHLERR3beocYCxHczt 9GYd3dUYT4pRaoPJwiAGnosBcensCqUEF4Ry25dPd2zA8DDhhkMuweGnUF5YkZ5f0cTO pZ0g== X-Gm-Message-State: AOJu0YxZ0sgLMTMFUEJkdHu/lEmJNSKabNUGKs+IGnczZHb87Uwx9E9H 5DE83D+iOblpwhyVVcfDS4j1Y2LlS3LiwUPs4X1u+x8K9ONzfIAG/9W2CYfexCctua8+nDMQyJt sMX8L X-Gm-Gg: ASbGncsu5Mb3u9OtPlm+feeBC2fOhEfpP9LShhN9pGYVBCw3yr11dVPjwfz5EweTJvS wNPXkmV2xhTCFQpAYE3AIPywu0nYaQfS6yylszdFBMRTjfXbON7hI/YQNUMfP38hSdbQjrsFoIt 
8Ey/lvqh8EY6m6Yo4JWWepiEAaRxDWJ1SQm6Z33+bGK7x+NTHEp8LlonvDsays9Np5BOZUFnQGy g4tTEg6g1Pg0+fvtXTq/zatcxAFevnpuc77yFQD70rxfQjbUMleOkwYdPdLh6VkozCHFDVJopn7 qNzYcsb/vRyM+Y+ssTyBU7KOAQAhEUGZb0W4GTWf1ep/OL/MOB7MvF4hzDq9Y/neVJxQ40VaDYa mZkheD2iwzW02oLulXeXW1JbtqmSdi3WYcDX7dBNzdYY= X-Google-Smtp-Source: AGHT+IGyk6RhdDmmQYRKa43EWL5xy59i1Kmz2auZR5K0UsuCeOok5NwbH2acEVA9iI6lHZYGamQSvg== X-Received: by 2002:a17:903:9cb:b0:24d:f9f:de8f with SMTP id d9443c01a7336-2902723ca52mr370275655ad.17.1760481913135; Tue, 14 Oct 2025 15:45:13 -0700 (PDT) Received: from hexa.. ([2602:feb4:3b:2100:ebea:520a:7699:bba7]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29034e20479sm174847365ad.47.2025.10.14.15.45.12 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Oct 2025 15:45:12 -0700 (PDT) From: Steve Sakoman To: openembedded-core@lists.openembedded.org Subject: [OE-core][kirkstone 10/14] glibc: nptl Use a single loop in pthread_cond_wait instaed of a nested loop Date: Tue, 14 Oct 2025 15:44:47 -0700 Message-ID: <75bbc8cb3a94640120d778916abb2edf78b89fd0.1760481775.git.steve@sakoman.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: References: MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Tue, 14 Oct 2025 22:45:17 -0000 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-core/message/224864 From: Sunil Dora The following commits have been cherry-picked from Glibc master branch: Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 [1] https://sourceware.org/git/?p=glibc.git;a=commit;h=929a4764ac90382616b6a21f099192b2475da674 [2] https://sourceware.org/pipermail/libc-stable/2025-July/002279.html Signed-off-by: Sunil Dora Signed-off-by: Steve Sakoman --- .../glibc/glibc/0026-PR25847-6.patch | 103 ++++++++++++++++++ meta/recipes-core/glibc/glibc_2.35.bb | 1 + 2 files changed, 104 insertions(+) create mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-6.patch diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-6.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-6.patch new file mode 100644 index 0000000000..7d5c4fda5f --- /dev/null +++ b/meta/recipes-core/glibc/glibc/0026-PR25847-6.patch @@ -0,0 +1,103 @@ +From bbd7c84a1a14bf93bf1e5976d8a1540aabbf901b Mon Sep 17 00:00:00 2001 +From: Malte Skarupke +Date: Tue, 14 Oct 2025 06:19:02 -0700 +Subject: [PATCH] nptl: Use a single loop in pthread_cond_wait instaed of a + nested loop + +The loop was a little more complicated than necessary. There was only one +break statement out of the inner loop, and the outer loop was nearly empty. +So just remove the outer loop, moving its code to the one break statement in +the inner loop. This allows us to replace all gotos with break statements. 
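(Aside, not part of the quoted patch: a toy standalone C sketch of the reshaped control flow. The nearly-empty outer do/while becomes a 'continue' on the failed grab and the gotos become breaks; the little budget counter and all names below are made up for illustration.)

#include <stdbool.h>
#include <stdio.h>

static int budget = 3;

static bool closed (void)    { return budget == 0; }
static bool ready (void)     { return budget > 0; }
static void wait_once (void) { puts ("waiting"); }
static bool try_grab (void)  { if (budget > 0) { budget--; return true; } return false; }

/* Single loop: the retry that the old outer do/while performed is now
   just a 'continue'; the old 'goto done' exits are plain breaks.  */
static bool
get_one (void)
{
  while (1)
    {
      if (closed ())
        break;                 /* nothing more will ever arrive */
      if (ready ())
        {
          if (try_grab ())
            return true;       /* consumed one */
          continue;            /* lost a race: retry immediately */
        }
      wait_once ();            /* otherwise block and re-check */
    }
  return false;
}

int
main (void)
{
  while (get_one ())
    puts ("got one");
  puts ("done");
  return 0;
}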
+ +The following commits have been cherry-picked from Glibc master branch: +Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 +commit: 929a4764ac90382616b6a21f099192b2475da674 + +Upstream-Status: Submitted +[https://sourceware.org/pipermail/libc-stable/2025-July/002279.html] + +Signed-off-by: Sunil Dora +--- + nptl/pthread_cond_wait.c | 41 +++++++++++++++++++--------------------- + 1 file changed, 19 insertions(+), 22 deletions(-) + +diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c +index 8a9219e0..c8c99bbf 100644 +--- a/nptl/pthread_cond_wait.c ++++ b/nptl/pthread_cond_wait.c +@@ -382,17 +382,15 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + return err; + } + +- /* Now wait until a signal is available in our group or it is closed. +- Acquire MO so that if we observe (signals == lowseq) after group +- switching in __condvar_quiesce_and_switch_g1, we synchronize with that +- store and will see the prior update of __g1_start done while switching +- groups too. */ +- unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g); +- +- do +- { ++ + while (1) + { ++ /* Now wait until a signal is available in our group or it is closed. ++ Acquire MO so that if we observe (signals == lowseq) after group ++ switching in __condvar_quiesce_and_switch_g1, we synchronize with that ++ store and will see the prior update of __g1_start done while switching ++ groups too. */ ++ unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g); + uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); + unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; + +@@ -401,7 +399,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + /* If the group is closed already, + then this waiter originally had enough extra signals to + consume, up until the time its group was closed. */ +- goto done; ++ break; + } + + /* If there is an available signal, don't block. +@@ -410,7 +408,16 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + G2, but in either case we're allowed to consume the available + signal and should not block anymore. */ + if ((int)(signals - lowseq) >= 2) +- break; ++ { ++ /* Try to grab a signal. See above for MO. (if we do another loop ++ iteration we need to see the correct value of g1_start) */ ++ if (atomic_compare_exchange_weak_acquire ( ++ cond->__data.__g_signals + g, ++ &signals, signals - 2)) ++ break; ++ else ++ continue; ++ } + + // Now block. + struct _pthread_cleanup_buffer buffer; +@@ -431,19 +438,9 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + /* If we timed out, we effectively cancel waiting. */ + __condvar_cancel_waiting (cond, seq, g, private); + result = err; +- goto done; ++ break; + } +- +- /* Reload signals. See above for MO. */ +- signals = atomic_load_acquire (cond->__data.__g_signals + g); + } +- } +- /* Try to grab a signal. See above for MO. (if we do another loop +- iteration we need to see the correct value of g1_start) */ +- while (!atomic_compare_exchange_weak_acquire (cond->__data.__g_signals + g, +- &signals, signals - 2)); +- +- done: + + /* Confirm that we have been woken. 
We do that before acquiring the mutex + to allow for execution of pthread_cond_destroy while having acquired the +-- +2.49.0 + diff --git a/meta/recipes-core/glibc/glibc_2.35.bb b/meta/recipes-core/glibc/glibc_2.35.bb index e744260e87..3034461e9e 100644 --- a/meta/recipes-core/glibc/glibc_2.35.bb +++ b/meta/recipes-core/glibc/glibc_2.35.bb @@ -67,6 +67,7 @@ SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \ file://0026-PR25847-3.patch \ file://0026-PR25847-4.patch \ file://0026-PR25847-5.patch \ + file://0026-PR25847-6.patch \ \ file://0001-Revert-Linux-Implement-a-useful-version-of-_startup_.patch \ file://0002-get_nscd_addresses-Fix-subscript-typos-BZ-29605.patch \ From patchwork Tue Oct 14 22:44:48 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Sakoman X-Patchwork-Id: 72327 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9BC07CCD18E for ; Tue, 14 Oct 2025 22:45:17 +0000 (UTC) Received: from mail-pl1-f177.google.com (mail-pl1-f177.google.com [209.85.214.177]) by mx.groups.io with SMTP id smtpd.web11.2582.1760481915478986504 for ; Tue, 14 Oct 2025 15:45:15 -0700 Authentication-Results: mx.groups.io; dkim=pass header.i=@sakoman-com.20230601.gappssmtp.com header.s=20230601 header.b=Oh6NixA/; spf=softfail (domain: sakoman.com, ip: 209.85.214.177, mailfrom: steve@sakoman.com) Received: by mail-pl1-f177.google.com with SMTP id d9443c01a7336-27ee41e0798so93652635ad.1 for ; Tue, 14 Oct 2025 15:45:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sakoman-com.20230601.gappssmtp.com; s=20230601; t=1760481915; x=1761086715; darn=lists.openembedded.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=IC16VDRqqB8ZnJEm7ff7M6z+FJmnTO5Wg6OYO7irJ+8=; b=Oh6NixA/scn0U3x+cRFly2MA26dQZa5zudrEN8Gty2Hm0Ft2k/MS9L+V7KQ7g1Doaf g064IZBN5Qh5TavIVSBsYyWCOm4aoPh2ut7PyPCHTBUKYrTveFZAaT46ys+9JiuHoxzs /kxciQILF1Lh1kU+8AbDic0y2KBq16IDITyS8zvRaFCPTQgLeEbR+7dHndp+Ef07GIoq IBVu8Rz7vezyG69uKI7y1cNgpjqtwuVrGIz0Ex+6KgqtquenKXcgY24mur4TB8xdPv0m WvtT5mnl+vTrJYwVXcCdasBfpDY6apxyeoWhhaQ3XQ2ZmrJ+eFCZI6LzY8ugTxG3zSPY wdfg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1760481915; x=1761086715; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=IC16VDRqqB8ZnJEm7ff7M6z+FJmnTO5Wg6OYO7irJ+8=; b=S1/tPMX/cFm+Dy9A+xKSWJLo5/dZn5sb13CEZrpKFOstO37ekAqex4ph8TYYPCI1hY UOXhhBrF7/Q+6mPsOMEnWqO6fewoCEalz3GBe/TuyI41u+l7HI1QBrbFecZjtv0QfbuX JhTegr6Yoz2tjldXn68XfnIlowBC6jNbTrn5uEHyp10CBmBMth/9yWFpcwXyRnHbL40n Dl+oq+7wU206Ai4+5ItWEEQrkHJQ9IAmd45x6IZwYyljOOiXZziPtzBwLFNVaGpKZVwL uzKvmGIC3SGUSx/V6wuSG414JNvwpaITG/n8fAaPy419eGvQTZ8rduiQvLCQ5t0rLwOI eZIA== X-Gm-Message-State: AOJu0YwpuskLwTJ5arQRrvvmxe7SYyvl480OsYNkQSpjuKCbLsUIPID6 Jyv4nWqkOR4wgq/RbjamcpGE7IW6b5jTbENSqmPg6APLPKPk9B4BpqCDHfz+W7Jf6UTdC9OMWvC sHTEd X-Gm-Gg: ASbGncuJVg7ayBO9DqshiXfbRbY9pDN9CZvm01t3MZ6tQ3fmAaWEw9az1FSUf8nE+O1 8OuwXt0gKaXSfvCcSWh5oPUvCtAn1cWYwQVbmDNFIjhPS2+fm2liW5N4WPsJG1R/VnlyoRT6nSz 8zRt67xj2ryS4bmGyMhexMApKUVom+Erv4I+T6OtPHsoijCMsVWaAOwiXgqwNqasI81CCbEF0S9 
tGvA76kWFEqOxzgNXLD8cZrd3MBp6COApF9aTq6c9fnmX5ejzPaV9PiXw2nRQfSflkbgcCEeO2a UlWkseDrDmfs8QQOeaOC36cj+a8DZwKxue8Uqoun3HV2aTykvD5m8QFRxoGGSJVeNyv7ncwADWK Md3UjTVEp7GBpwRWvI3iX+K/aCboYmO1z X-Google-Smtp-Source: AGHT+IEK875H7DoBSg4CFcd/HILiUhFkvsQcvfVXfl/iOm2I4c4dOhRiwc5Ux+GZeZQZGnT7QJ9aiA== X-Received: by 2002:a17:903:246:b0:24c:9309:5883 with SMTP id d9443c01a7336-290273ecb35mr341425395ad.28.1760481914563; Tue, 14 Oct 2025 15:45:14 -0700 (PDT) Received: from hexa.. ([2602:feb4:3b:2100:ebea:520a:7699:bba7]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29034e20479sm174847365ad.47.2025.10.14.15.45.13 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Oct 2025 15:45:14 -0700 (PDT) From: Steve Sakoman To: openembedded-core@lists.openembedded.org Subject: [OE-core][kirkstone 11/14] glibc: nptl Fix indentation Date: Tue, 14 Oct 2025 15:44:48 -0700 Message-ID: X-Mailer: git-send-email 2.43.0 In-Reply-To: References: MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Tue, 14 Oct 2025 22:45:17 -0000 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-core/message/224865 From: Sunil Dora The following commits have been cherry-picked from Glibc master branch: Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 [1] https://sourceware.org/git/?p=glibc.git;a=commit;h=ee6c14ed59d480720721aaacc5fb03213dc153da [2] https://sourceware.org/pipermail/libc-stable/2025-July/002280.html Signed-off-by: Sunil Dora Signed-off-by: Steve Sakoman --- .../glibc/glibc/0026-PR25847-7.patch | 149 ++++++++++++++++++ meta/recipes-core/glibc/glibc_2.35.bb | 1 + 2 files changed, 150 insertions(+) create mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-7.patch diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-7.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-7.patch new file mode 100644 index 0000000000..74cb49670b --- /dev/null +++ b/meta/recipes-core/glibc/glibc/0026-PR25847-7.patch @@ -0,0 +1,149 @@ +From 1077953950d1e8864c63222967141c67f51297f8 Mon Sep 17 00:00:00 2001 +From: Malte Skarupke +Date: Tue, 14 Oct 2025 06:27:04 -0700 +Subject: [PATCH] nptl: Fix indentation + +In my previous change I turned a nested loop into a simple loop. I'm doing +the resulting indentation changes in a separate commit to make the diff on +the previous commit easier to review. + +The following commits have been cherry-picked from Glibc master branch: +Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 +commit: ee6c14ed59d480720721aaacc5fb03213dc153da + +Upstream-Status: Submitted +[https://sourceware.org/pipermail/libc-stable/2025-July/002280.html] + +Signed-off-by: Sunil Dora +--- + nptl/pthread_cond_wait.c | 110 +++++++++++++++++++-------------------- + 1 file changed, 55 insertions(+), 55 deletions(-) + +diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c +index c8c99bbf..adf26a80 100644 +--- a/nptl/pthread_cond_wait.c ++++ b/nptl/pthread_cond_wait.c +@@ -383,65 +383,65 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + } + + +- while (1) +- { +- /* Now wait until a signal is available in our group or it is closed. +- Acquire MO so that if we observe (signals == lowseq) after group +- switching in __condvar_quiesce_and_switch_g1, we synchronize with that +- store and will see the prior update of __g1_start done while switching +- groups too. 
*/ +- unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g); +- uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); +- unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; +- +- if (seq < (g1_start >> 1)) +- { +- /* If the group is closed already, +- then this waiter originally had enough extra signals to +- consume, up until the time its group was closed. */ +- break; +- } +- +- /* If there is an available signal, don't block. +- If __g1_start has advanced at all, then we must be in G1 +- by now, perhaps in the process of switching back to an older +- G2, but in either case we're allowed to consume the available +- signal and should not block anymore. */ +- if ((int)(signals - lowseq) >= 2) +- { +- /* Try to grab a signal. See above for MO. (if we do another loop +- iteration we need to see the correct value of g1_start) */ +- if (atomic_compare_exchange_weak_acquire ( +- cond->__data.__g_signals + g, ++ while (1) ++ { ++ /* Now wait until a signal is available in our group or it is closed. ++ Acquire MO so that if we observe (signals == lowseq) after group ++ switching in __condvar_quiesce_and_switch_g1, we synchronize with that ++ store and will see the prior update of __g1_start done while switching ++ groups too. */ ++ unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g); ++ uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); ++ unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; ++ ++ if (seq < (g1_start >> 1)) ++ { ++ /* If the group is closed already, ++ then this waiter originally had enough extra signals to ++ consume, up until the time its group was closed. */ ++ break; ++ } ++ ++ /* If there is an available signal, don't block. ++ If __g1_start has advanced at all, then we must be in G1 ++ by now, perhaps in the process of switching back to an older ++ G2, but in either case we're allowed to consume the available ++ signal and should not block anymore. */ ++ if ((int)(signals - lowseq) >= 2) ++ { ++ /* Try to grab a signal. See above for MO. (if we do another loop ++ iteration we need to see the correct value of g1_start) */ ++ if (atomic_compare_exchange_weak_acquire ( ++ cond->__data.__g_signals + g, + &signals, signals - 2)) +- break; +- else +- continue; +- } +- +- // Now block. +- struct _pthread_cleanup_buffer buffer; +- struct _condvar_cleanup_buffer cbuffer; +- cbuffer.wseq = wseq; +- cbuffer.cond = cond; +- cbuffer.mutex = mutex; +- cbuffer.private = private; +- __pthread_cleanup_push (&buffer, __condvar_cleanup_waiting, &cbuffer); +- +- err = __futex_abstimed_wait_cancelable64 ( +- cond->__data.__g_signals + g, signals, clockid, abstime, private); +- +- __pthread_cleanup_pop (&buffer, 0); +- +- if (__glibc_unlikely (err == ETIMEDOUT || err == EOVERFLOW)) +- { +- /* If we timed out, we effectively cancel waiting. */ +- __condvar_cancel_waiting (cond, seq, g, private); +- result = err; + break; +- } ++ else ++ continue; + } + ++ // Now block. 
++ struct _pthread_cleanup_buffer buffer; ++ struct _condvar_cleanup_buffer cbuffer; ++ cbuffer.wseq = wseq; ++ cbuffer.cond = cond; ++ cbuffer.mutex = mutex; ++ cbuffer.private = private; ++ __pthread_cleanup_push (&buffer, __condvar_cleanup_waiting, &cbuffer); ++ ++ err = __futex_abstimed_wait_cancelable64 ( ++ cond->__data.__g_signals + g, signals, clockid, abstime, private); ++ ++ __pthread_cleanup_pop (&buffer, 0); ++ ++ if (__glibc_unlikely (err == ETIMEDOUT || err == EOVERFLOW)) ++ { ++ /* If we timed out, we effectively cancel waiting. */ ++ __condvar_cancel_waiting (cond, seq, g, private); ++ result = err; ++ break; ++ } ++ } ++ + /* Confirm that we have been woken. We do that before acquiring the mutex + to allow for execution of pthread_cond_destroy while having acquired the + mutex. */ +-- +2.49.0 + diff --git a/meta/recipes-core/glibc/glibc_2.35.bb b/meta/recipes-core/glibc/glibc_2.35.bb index 3034461e9e..7ef2c8cb4c 100644 --- a/meta/recipes-core/glibc/glibc_2.35.bb +++ b/meta/recipes-core/glibc/glibc_2.35.bb @@ -68,6 +68,7 @@ SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \ file://0026-PR25847-4.patch \ file://0026-PR25847-5.patch \ file://0026-PR25847-6.patch \ + file://0026-PR25847-7.patch \ \ file://0001-Revert-Linux-Implement-a-useful-version-of-_startup_.patch \ file://0002-get_nscd_addresses-Fix-subscript-typos-BZ-29605.patch \ From patchwork Tue Oct 14 22:44:49 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Sakoman X-Patchwork-Id: 72332 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id A25EFCCD195 for ; Tue, 14 Oct 2025 22:45:17 +0000 (UTC) Received: from mail-pf1-f180.google.com (mail-pf1-f180.google.com [209.85.210.180]) by mx.groups.io with SMTP id smtpd.web11.2585.1760481917023925570 for ; Tue, 14 Oct 2025 15:45:17 -0700 Authentication-Results: mx.groups.io; dkim=pass header.i=@sakoman-com.20230601.gappssmtp.com header.s=20230601 header.b=dW/9g5UZ; spf=softfail (domain: sakoman.com, ip: 209.85.210.180, mailfrom: steve@sakoman.com) Received: by mail-pf1-f180.google.com with SMTP id d2e1a72fcca58-791c287c10dso4985793b3a.1 for ; Tue, 14 Oct 2025 15:45:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sakoman-com.20230601.gappssmtp.com; s=20230601; t=1760481916; x=1761086716; darn=lists.openembedded.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=9l+3Q7eG3L1nlRP5yN5OY2qk4dWecU9QlhI2Mj4y7T4=; b=dW/9g5UZ0qCuG2zin9UIxCXnpIybYVRVVklixZXJOMbi2EkQPfM4cAW8y+qVpw+aJo I1Svi3gXUf6CGKgExFfvHbLrNDNtO/9e84nsVsJMWer/fsNOp67TE2nyEIBg+CShuemJ GOllF4btGCsoY7wQfWTCLXybIYWQI2KEls76QsKqCcpychWxBCdRJrhZb0asbqEcHbwY pjQCBKdLsZxFTabD3P5+S7X2YXL9Ick1q4z+jr3VObfwpKkpGQRaB50HDF/W3VXlvNQt YduaeXb4bYkCDbQUGt0NZnDw3mkMWCrmrbjL99dvt0GUxCukTWw3DekqjsdNYbrdR1KO zNtg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1760481916; x=1761086716; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=9l+3Q7eG3L1nlRP5yN5OY2qk4dWecU9QlhI2Mj4y7T4=; b=UHP4fOVcgOf6hvGyumU8N4Af5x8dfsTiqmNd1sB3Be5TiO12QojvCxIc8v023rwkhX 
Ks2CVKGorYg2PS473aW3A4k/Fbc6U7X16BMmBHNQb0ZDvEHCpnorsaHT+wwHXEI6D8uA 8JX5SAbdFfDJ+wnm/HmvmeRU+Ap3aOGC24C0F61Oj5pYhrHNVRVhtyNyG0DXRV1dTedj yxY4hSqhKJg3pYVR2SmjjTtEIUTCsW/CD4fj4Ra8NPKI6SqxCeOlRkFoEUf/2zx7SYTZ VL48md0zPadlvrZUZ9yHZYYREDFfc+muwjLDXubngSe4Sqno4wxV8I+FO2fGUsTX/9HX hRfA== X-Gm-Message-State: AOJu0Yy+FLYs7KebEPNU4SyKzs5W/Jdt2jJ5sh013LXRoMYfGMSLAJi7 mFi8PdHRGLj89QCd2iWnjfsrKmUV07qbKzJW7tRVALjhLYWefPQ0MZ0QVPuQEH7lu3ZDbuF9IXz nUGI6 X-Gm-Gg: ASbGncvOeMUQ+dbHsZKJkIgWJkwI0KoATcHBamOQeRPHcwQOPNK5Fa1ZvUHJOukwK1u /SRzPJyxs4sG7A5hQZLkl9vtF1IjImrl/z9vVnpMHNBsmtfG5WFpjF/THPkcqqy5jiXFEMLw1Ic o9JJ1sPvggHuggvhcnLuSle1moQDdJWM6/RrsT7AJYjvYELR1ryL9vg4dfAJy4eJybqt7+01Kz8 OcXIP91XE8ghfNijrmJe8vyVnjqvgzDyPKEY9zd16L72gbQM6mTA14KLZFQGKPbfQCbAIsHCgwO kaCtYybGGv0SQOlVz4nHxDugWPSNOI1H3F8H/i750lmWpuKjaXlF0q6442jon/jN+qh3GWMKjBE YOJpxU1zdVCu4LNUg+OvgRqw7sBVoM5YwBmx9jSmOPWA= X-Google-Smtp-Source: AGHT+IEMcvCD0BNuai7mT9c2RPZ2caleHJGuQBixwEWFVl8ZjLwqbl0TQhZ9xseQvG0eSNFD9ljtaw== X-Received: by 2002:a17:902:f651:b0:24b:1589:5054 with SMTP id d9443c01a7336-2902723c4acmr325737015ad.23.1760481916107; Tue, 14 Oct 2025 15:45:16 -0700 (PDT) Received: from hexa.. ([2602:feb4:3b:2100:ebea:520a:7699:bba7]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29034e20479sm174847365ad.47.2025.10.14.15.45.15 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Oct 2025 15:45:15 -0700 (PDT) From: Steve Sakoman To: openembedded-core@lists.openembedded.org Subject: [OE-core][kirkstone 12/14] glibc: nptl rename __condvar_quiesce_and_switch_g1 Date: Tue, 14 Oct 2025 15:44:49 -0700 Message-ID: <0a9ccd040037c12aa2e7fbc2213ca60b30dafcc4.1760481775.git.steve@sakoman.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: References: MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Tue, 14 Oct 2025 22:45:17 -0000 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-core/message/224866 From: Sunil Dora The following commits have been cherry-picked from Glibc master branch: Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 [1] https://sourceware.org/git/?p=glibc.git;a=commit;h=4b79e27a5073c02f6bff9aa8f4791230a0ab1867 [2] https://sourceware.org/pipermail/libc-stable/2025-July/002281.html Signed-off-by: Sunil Dora Signed-off-by: Steve Sakoman --- .../glibc/glibc/0026-PR25847-8.patch | 161 ++++++++++++++++++ meta/recipes-core/glibc/glibc_2.35.bb | 1 + 2 files changed, 162 insertions(+) create mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-8.patch diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-8.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-8.patch new file mode 100644 index 0000000000..83d9ca01a9 --- /dev/null +++ b/meta/recipes-core/glibc/glibc/0026-PR25847-8.patch @@ -0,0 +1,161 @@ +From 20d84dfa0b9a32f88259269bbeaae588744ae4ae Mon Sep 17 00:00:00 2001 +From: Malte Skarupke +Date: Tue, 14 Oct 2025 06:33:50 -0700 +Subject: [PATCH] nptl: rename __condvar_quiesce_and_switch_g1 + +This function no longer waits for threads to leave g1, so rename it to +__condvar_switch_g1 + +The following commits have been cherry-picked from Glibc master branch: +Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 +commit: 4b79e27a5073c02f6bff9aa8f4791230a0ab1867 + +Upstream-Status: Submitted +[https://sourceware.org/pipermail/libc-stable/2025-July/002281.html] + +Signed-off-by: Sunil Dora +--- + nptl/pthread_cond_broadcast.c | 4 ++-- + nptl/pthread_cond_common.c | 26 
++++++++++++-------------- + nptl/pthread_cond_signal.c | 17 ++++++++--------- + nptl/pthread_cond_wait.c | 9 ++++----- + 4 files changed, 26 insertions(+), 30 deletions(-) + +diff --git a/nptl/pthread_cond_broadcast.c b/nptl/pthread_cond_broadcast.c +index 5ae141ac..a0743558 100644 +--- a/nptl/pthread_cond_broadcast.c ++++ b/nptl/pthread_cond_broadcast.c +@@ -60,7 +60,7 @@ ___pthread_cond_broadcast (pthread_cond_t *cond) + cond->__data.__g_size[g1] << 1); + cond->__data.__g_size[g1] = 0; + +- /* We need to wake G1 waiters before we quiesce G1 below. */ ++ /* We need to wake G1 waiters before we switch G1 below. */ + /* TODO Only set it if there are indeed futex waiters. We could + also try to move this out of the critical section in cases when + G2 is empty (and we don't need to quiesce). */ +@@ -69,7 +69,7 @@ ___pthread_cond_broadcast (pthread_cond_t *cond) + + /* G1 is complete. Step (2) is next unless there are no waiters in G2, in + which case we can stop. */ +- if (__condvar_quiesce_and_switch_g1 (cond, wseq, &g1, private)) ++ if (__condvar_switch_g1 (cond, wseq, &g1, private)) + { + /* Step (3): Send signals to all waiters in the old G2 / new G1. */ + atomic_fetch_add_relaxed (cond->__data.__g_signals + g1, +diff --git a/nptl/pthread_cond_common.c b/nptl/pthread_cond_common.c +index f976a533..3baac4da 100644 +--- a/nptl/pthread_cond_common.c ++++ b/nptl/pthread_cond_common.c +@@ -189,16 +189,15 @@ __condvar_get_private (int flags) + return FUTEX_SHARED; + } + +-/* This closes G1 (whose index is in G1INDEX), waits for all futex waiters to +- leave G1, converts G1 into a fresh G2, and then switches group roles so that +- the former G2 becomes the new G1 ending at the current __wseq value when we +- eventually make the switch (WSEQ is just an observation of __wseq by the +- signaler). ++/* This closes G1 (whose index is in G1INDEX), converts G1 into a fresh G2, ++ and then switches group roles so that the former G2 becomes the new G1 ++ ending at the current __wseq value when we eventually make the switch ++ (WSEQ is just an observation of __wseq by the signaler). + If G2 is empty, it will not switch groups because then it would create an + empty G1 which would require switching groups again on the next signal. + Returns false iff groups were not switched because G2 was empty. */ + static bool __attribute__ ((unused)) +-__condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, ++__condvar_switch_g1 (pthread_cond_t *cond, uint64_t wseq, + unsigned int *g1index, int private) + { + unsigned int g1 = *g1index; +@@ -214,8 +213,7 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, + + cond->__data.__g_size[g1 ^ 1]) == 0) + return false; + +- /* Now try to close and quiesce G1. We have to consider the following kinds +- of waiters: ++ /* We have to consider the following kinds of waiters: + * Waiters from less recent groups than G1 are not affected because + nothing will change for them apart from __g1_start getting larger. + * New waiters arriving concurrently with the group switching will all go +@@ -223,12 +221,12 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq, + are not affected. + * Waiters in G1 have already received a signal and been woken. */ + +- /* Update __g1_start, which finishes closing this group. The value we add +- will never be negative because old_orig_size can only be zero when we +- switch groups the first time after a condvar was initialized, in which +- case G1 will be at index 1 and we will add a value of 1. 
+- Relaxed MO is fine because the change comes with no additional +- constraints that others would have to observe. */ ++ /* Update __g1_start, which closes this group. The value we add will never ++ be negative because old_orig_size can only be zero when we switch groups ++ the first time after a condvar was initialized, in which case G1 will be ++ at index 1 and we will add a value of 1. Relaxed MO is fine because the ++ change comes with no additional constraints that others would have to ++ observe. */ + __condvar_add_g1_start_relaxed (cond, + (old_orig_size << 1) + (g1 == 1 ? 1 : - 1)); + +diff --git a/nptl/pthread_cond_signal.c b/nptl/pthread_cond_signal.c +index 14800ba0..a9bc10dc 100644 +--- a/nptl/pthread_cond_signal.c ++++ b/nptl/pthread_cond_signal.c +@@ -69,18 +69,17 @@ ___pthread_cond_signal (pthread_cond_t *cond) + bool do_futex_wake = false; + + /* If G1 is still receiving signals, we put the signal there. If not, we +- check if G2 has waiters, and if so, quiesce and switch G1 to the former +- G2; if this results in a new G1 with waiters (G2 might have cancellations +- already, see __condvar_quiesce_and_switch_g1), we put the signal in the +- new G1. */ ++ check if G2 has waiters, and if so, switch G1 to the former G2; if this ++ results in a new G1 with waiters (G2 might have cancellations already, ++ see __condvar_switch_g1), we put the signal in the new G1. */ + if ((cond->__data.__g_size[g1] != 0) +- || __condvar_quiesce_and_switch_g1 (cond, wseq, &g1, private)) ++ || __condvar_switch_g1 (cond, wseq, &g1, private)) + { + /* Add a signal. Relaxed MO is fine because signaling does not need to +- establish a happens-before relation (see above). We do not mask the +- release-MO store when initializing a group in +- __condvar_quiesce_and_switch_g1 because we use an atomic +- read-modify-write and thus extend that store's release sequence. */ ++ establish a happens-before relation (see above). We do not mask the ++ release-MO store when initializing a group in __condvar_switch_g1 ++ because we use an atomic read-modify-write and thus extend that ++ store's release sequence. */ + atomic_fetch_add_relaxed (cond->__data.__g_signals + g1, 2); + cond->__data.__g_size[g1]--; + /* TODO Only set it if there are indeed futex waiters. */ +diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c +index adf26a80..40a74342 100644 +--- a/nptl/pthread_cond_wait.c ++++ b/nptl/pthread_cond_wait.c +@@ -354,8 +354,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + because we do not need to establish any happens-before relation with + signalers (see __pthread_cond_signal); modification order alone + establishes a total order of waiters/signals. We do need acquire MO +- to synchronize with group reinitialization in +- __condvar_quiesce_and_switch_g1. */ ++ to synchronize with group reinitialization in __condvar_switch_g1. */ + uint64_t wseq = __condvar_fetch_add_wseq_acquire (cond, 2); + /* Find our group's index. We always go into what was G2 when we acquired + our position. */ +@@ -387,9 +386,9 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + { + /* Now wait until a signal is available in our group or it is closed. + Acquire MO so that if we observe (signals == lowseq) after group +- switching in __condvar_quiesce_and_switch_g1, we synchronize with that +- store and will see the prior update of __g1_start done while switching +- groups too. 
*/ ++ switching in __condvar_switch_g1, we synchronize with that store and ++ will see the prior update of __g1_start done while switching groups ++ too. */ + unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g); + uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); + unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U; +-- +2.49.0 + diff --git a/meta/recipes-core/glibc/glibc_2.35.bb b/meta/recipes-core/glibc/glibc_2.35.bb index 7ef2c8cb4c..265dcb9129 100644 --- a/meta/recipes-core/glibc/glibc_2.35.bb +++ b/meta/recipes-core/glibc/glibc_2.35.bb @@ -69,6 +69,7 @@ SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \ file://0026-PR25847-5.patch \ file://0026-PR25847-6.patch \ file://0026-PR25847-7.patch \ + file://0026-PR25847-8.patch \ \ file://0001-Revert-Linux-Implement-a-useful-version-of-_startup_.patch \ file://0002-get_nscd_addresses-Fix-subscript-typos-BZ-29605.patch \ From patchwork Tue Oct 14 22:44:50 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Sakoman X-Patchwork-Id: 72333 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id B668DCCD184 for ; Tue, 14 Oct 2025 22:45:27 +0000 (UTC) Received: from mail-pl1-f174.google.com (mail-pl1-f174.google.com [209.85.214.174]) by mx.groups.io with SMTP id smtpd.web11.2589.1760481918841742618 for ; Tue, 14 Oct 2025 15:45:18 -0700 Authentication-Results: mx.groups.io; dkim=pass header.i=@sakoman-com.20230601.gappssmtp.com header.s=20230601 header.b=xfjbFOp4; spf=softfail (domain: sakoman.com, ip: 209.85.214.174, mailfrom: steve@sakoman.com) Received: by mail-pl1-f174.google.com with SMTP id d9443c01a7336-27ee41e074dso67948095ad.1 for ; Tue, 14 Oct 2025 15:45:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sakoman-com.20230601.gappssmtp.com; s=20230601; t=1760481918; x=1761086718; darn=lists.openembedded.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=FeRuLHNAAYYuitw6iDsi2f5xuD3ShDPaFx/itliUl5o=; b=xfjbFOp4NN3ovx8GZZ6f4mush2xB0w2veffiu9ba/Yga6Na9ApiFrZtkIUkt7TsV44 Rvvu7hV7b9d+j++Wqlmv536UOVt79+vruFXQSbrIuNUUq5zKFJfmC/fY3U9ylLrQ1xb8 6QMl6jPkF4fUDn/XboV+I+azca00MpMwLBy7AukO6USnUQViRR1eIcaUadUN/RSgK8vV 0aKhqbnqk4mmuVUIYGrFYG/QbLQlGQ8Ylf3jFz8+/zNorgvJeuFgTZApxMCXLlbxH8pD vJ+JJ9d2b2sVLYBA75DSgRAP7kL9uFMgPgTgfND2X8KdKFedA3ZDm1tXIILADAMz1eCb 2atg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1760481918; x=1761086718; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=FeRuLHNAAYYuitw6iDsi2f5xuD3ShDPaFx/itliUl5o=; b=gRG3LzjcF5FPzDCD5urpUBrWk/bUT3U/9CeWFzPiL7M+eLN8AhrNEsapatZE7dvS7w qwzOTmfrCAN9zUfNMr0B+rP6kTkZIIbKolca0TPn3SXQbblpNaW4JMArgRvfhfWSdyjH pfYR+/BHqK/6rBRIEGpHI4Z5LimbknDi4DNGbrrLC7ptls+TCodhOfckJMVwdD2wAhW/ mOfBB6blLyOGuX7EYVobMtszUEazM6oYlByt3N22r4ozKtjkYyTPUABbZpGBbTu6wdqv n9sFQ6+5Tq/NcH6pv748O1fGmg3GuqoPyny9A1gD+wR4eWjkyJLkjoQFTHXOAYBcZnF8 ZwPQ== X-Gm-Message-State: AOJu0YzQ5wTBEJ15oEOL0u5fkuw6553EF4LBgPZAwxKFcggCBnem9K2F Q7aMbB5kwtFna/9giQbt0qJV0CAOu9gQA70iDXe2bLYl8oVk7ZDM+RIwfcCrQYqQWr18JxCFc+B M1ih7 X-Gm-Gg: 
ASbGncvWTRIpzommOvJjl2NmVTDS/N191Gl4rj5lVzlKGHLIS1yUWjNCQsZCBCT3xep 9RtNWsGUDpziTpoQA7RgPWryFBuUa88JaFUiIiLuHhZdtYEwRmjmKEnJ1hcjdXjiLWScTB8zmC/ mpDluk3bfOnf6N5cESLLG9C0ltlTkT/sex31D/LkjIjBS3XFJVPGpjfByAXbkVkG8shfqy9Mfob GR+uKCliZKJsi3/pS7N8IpcMBeCGSQGiEeaLk9WYhpfnwtTDkgROiSD2YWH850ZHymxtlEtVV1J pjY9wdWC3bgDbrfFAHXeeROEab09+h4jEoEsV/nyEVIBIyqe+KmEwSaWygdYYul3jcOj3+GMe6O BoPL0aZ9bxBve0wxjxuKuRvkAxxmLkiyGwBQuBTk7tGM= X-Google-Smtp-Source: AGHT+IGl/Cw7/nNujG7Cv/0Rb2tfh7yDFsZmP4wlOTHVbxDFSNTg4r3ICmNX15n4w4bQCiPLpJrAnQ== X-Received: by 2002:a17:903:1a03:b0:269:8059:83ab with SMTP id d9443c01a7336-290272e1ccamr298660605ad.51.1760481917908; Tue, 14 Oct 2025 15:45:17 -0700 (PDT) Received: from hexa.. ([2602:feb4:3b:2100:ebea:520a:7699:bba7]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29034e20479sm174847365ad.47.2025.10.14.15.45.17 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Oct 2025 15:45:17 -0700 (PDT) From: Steve Sakoman To: openembedded-core@lists.openembedded.org Subject: [OE-core][kirkstone 13/14] glibc: nptl Use all of g1_start and g_signals Date: Tue, 14 Oct 2025 15:44:50 -0700 Message-ID: <4593e800b832d740d0b63ddd4b5c948c564116b2.1760481775.git.steve@sakoman.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: References: MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Tue, 14 Oct 2025 22:45:27 -0000 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-core/message/224867 From: Sunil Dora The following commits have been cherry-picked from Glibc master branch: Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 [1] https://sourceware.org/git/?p=glibc.git;a=commit;h=91bb902f58264a2fd50fbce8f39a9a290dd23706 [2] https://sourceware.org/pipermail/libc-stable/2025-July/002283.html Signed-off-by: Sunil Dora Signed-off-by: Steve Sakoman --- .../glibc/glibc/0026-PR25847-9.patch | 193 ++++++++++++++++++ meta/recipes-core/glibc/glibc_2.35.bb | 1 + 2 files changed, 194 insertions(+) create mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-9.patch diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-9.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-9.patch new file mode 100644 index 0000000000..49815c6fb7 --- /dev/null +++ b/meta/recipes-core/glibc/glibc/0026-PR25847-9.patch @@ -0,0 +1,193 @@ +From c2677e68956bb9677d8de4ee6c5341b1a744d490 Mon Sep 17 00:00:00 2001 +From: Malte Skarupke +Date: Tue, 14 Oct 2025 06:40:57 -0700 +Subject: [PATCH] nptl: Use all of g1_start and g_signals + +The LSB of g_signals was unused. The LSB of g1_start was used to indicate +which group is G2. This was used to always go to sleep in pthread_cond_wait +if a waiter is in G2. A comment earlier in the file says that this is not +correct to do: + + "Waiters cannot determine whether they are currently in G2 or G1 -- but they + do not have to because all they are interested in is whether there are + available signals" + +I either would have had to update the comment, or get rid of the check. I +chose to get rid of the check. In fact I don't quite know why it was there. +There will never be available signals for group G2, so we didn't need the +special case. Even if there were, this would just be a spurious wake. This +might have caught some cases where the count has wrapped around, but it +wouldn't reliably do that, (and even if it did, why would you want to force a +sleep in that case?) and we don't support that many concurrent waiters +anyway. 
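For illustration only (not part of this patch), a small C sketch of the wraparound-tolerant comparison style the reworked code relies on: two unsigned counters are compared through the sign of their difference, so the check keeps working after either counter wraps, provided they stay within 2^31 of each other. The values below are invented.

#include <stdio.h>

/* Returns nonzero if a is "ahead of" b in modular (wraparound) order. */
static int ahead_of (unsigned a, unsigned b)
{
  return (int) (a - b) > 0;
}

int main (void)
{
  unsigned signals  = 0x00000002u;   /* counter that has just wrapped  */
  unsigned g1_start = 0xfffffffeu;   /* counter still near the top     */

  /* A naive comparison gets this wrong; the signed-difference form does not. */
  printf ("naive   signals > g1_start          : %d\n", signals > g1_start);
  printf ("modular ahead_of (signals, g1_start): %d\n",
          ahead_of (signals, g1_start));
  return 0;
}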
Getting rid of it allows us to use one more bit, making us more +robust to wraparound. + +The following commits have been cherry-picked from Glibc master branch: +Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 +commit: 91bb902f58264a2fd50fbce8f39a9a290dd23706 + +Upstream-Status: Submitted +[https://sourceware.org/pipermail/libc-stable/2025-July/002283.html] + +Signed-off-by: Sunil Dora +--- + nptl/pthread_cond_broadcast.c | 4 ++-- + nptl/pthread_cond_common.c | 26 ++++++++++---------------- + nptl/pthread_cond_signal.c | 2 +- + nptl/pthread_cond_wait.c | 14 +++++--------- + 4 files changed, 18 insertions(+), 28 deletions(-) + +diff --git a/nptl/pthread_cond_broadcast.c b/nptl/pthread_cond_broadcast.c +index a0743558..ef0943cd 100644 +--- a/nptl/pthread_cond_broadcast.c ++++ b/nptl/pthread_cond_broadcast.c +@@ -57,7 +57,7 @@ ___pthread_cond_broadcast (pthread_cond_t *cond) + { + /* Add as many signals as the remaining size of the group. */ + atomic_fetch_add_relaxed (cond->__data.__g_signals + g1, +- cond->__data.__g_size[g1] << 1); ++ cond->__data.__g_size[g1]); + cond->__data.__g_size[g1] = 0; + + /* We need to wake G1 waiters before we switch G1 below. */ +@@ -73,7 +73,7 @@ ___pthread_cond_broadcast (pthread_cond_t *cond) + { + /* Step (3): Send signals to all waiters in the old G2 / new G1. */ + atomic_fetch_add_relaxed (cond->__data.__g_signals + g1, +- cond->__data.__g_size[g1] << 1); ++ cond->__data.__g_size[g1]); + cond->__data.__g_size[g1] = 0; + /* TODO Only set it if there are indeed futex waiters. */ + do_futex_wake = true; +diff --git a/nptl/pthread_cond_common.c b/nptl/pthread_cond_common.c +index 3baac4da..e48f9143 100644 +--- a/nptl/pthread_cond_common.c ++++ b/nptl/pthread_cond_common.c +@@ -208,9 +208,9 @@ __condvar_switch_g1 (pthread_cond_t *cond, uint64_t wseq, + behavior. + Note that this works correctly for a zero-initialized condvar too. */ + unsigned int old_orig_size = __condvar_get_orig_size (cond); +- uint64_t old_g1_start = __condvar_load_g1_start_relaxed (cond) >> 1; +- if (((unsigned) (wseq - old_g1_start - old_orig_size) +- + cond->__data.__g_size[g1 ^ 1]) == 0) ++ uint64_t old_g1_start = __condvar_load_g1_start_relaxed (cond); ++ uint64_t new_g1_start = old_g1_start + old_orig_size; ++ if (((unsigned) (wseq - new_g1_start) + cond->__data.__g_size[g1 ^ 1]) == 0) + return false; + + /* We have to consider the following kinds of waiters: +@@ -221,16 +221,10 @@ __condvar_switch_g1 (pthread_cond_t *cond, uint64_t wseq, + are not affected. + * Waiters in G1 have already received a signal and been woken. */ + +- /* Update __g1_start, which closes this group. The value we add will never +- be negative because old_orig_size can only be zero when we switch groups +- the first time after a condvar was initialized, in which case G1 will be +- at index 1 and we will add a value of 1. Relaxed MO is fine because the +- change comes with no additional constraints that others would have to +- observe. */ +- __condvar_add_g1_start_relaxed (cond, +- (old_orig_size << 1) + (g1 == 1 ? 1 : - 1)); +- +- unsigned int lowseq = ((old_g1_start + old_orig_size) << 1) & ~1U; ++ /* Update __g1_start, which closes this group. Relaxed MO is fine because ++ the change comes with no additional constraints that others would have ++ to observe. */ ++ __condvar_add_g1_start_relaxed (cond, old_orig_size); + + /* At this point, the old G1 is now a valid new G2 (but not in use yet). 
+ No old waiter can neither grab a signal nor acquire a reference without +@@ -242,13 +236,13 @@ __condvar_switch_g1 (pthread_cond_t *cond, uint64_t wseq, + g1 ^= 1; + *g1index ^= 1; + +- /* Now advance the new G1 g_signals to the new lowseq, giving it ++ /* Now advance the new G1 g_signals to the new g1_start, giving it + an effective signal count of 0 to start. */ +- atomic_store_release (cond->__data.__g_signals + g1, lowseq); ++ atomic_store_release (cond->__data.__g_signals + g1, (unsigned)new_g1_start); + + /* These values are just observed by signalers, and thus protected by the + lock. */ +- unsigned int orig_size = wseq - (old_g1_start + old_orig_size); ++ unsigned int orig_size = wseq - new_g1_start; + __condvar_set_orig_size (cond, orig_size); + /* Use and addition to not loose track of cancellations in what was + previously G2. */ +diff --git a/nptl/pthread_cond_signal.c b/nptl/pthread_cond_signal.c +index a9bc10dc..07427369 100644 +--- a/nptl/pthread_cond_signal.c ++++ b/nptl/pthread_cond_signal.c +@@ -80,7 +80,7 @@ ___pthread_cond_signal (pthread_cond_t *cond) + release-MO store when initializing a group in __condvar_switch_g1 + because we use an atomic read-modify-write and thus extend that + store's release sequence. */ +- atomic_fetch_add_relaxed (cond->__data.__g_signals + g1, 2); ++ atomic_fetch_add_relaxed (cond->__data.__g_signals + g1, 1); + cond->__data.__g_size[g1]--; + /* TODO Only set it if there are indeed futex waiters. */ + do_futex_wake = true; +diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c +index 40a74342..d7e073ab 100644 +--- a/nptl/pthread_cond_wait.c ++++ b/nptl/pthread_cond_wait.c +@@ -84,7 +84,7 @@ __condvar_cancel_waiting (pthread_cond_t *cond, uint64_t seq, unsigned int g, + not hold a reference on the group. */ + __condvar_acquire_lock (cond, private); + +- uint64_t g1_start = __condvar_load_g1_start_relaxed (cond) >> 1; ++ uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); + if (g1_start > seq) + { + /* Our group is closed, so someone provided enough signals for it. +@@ -259,7 +259,6 @@ __condvar_cleanup_waiting (void *arg) + * Waiters fetch-add while having acquire the mutex associated with the + condvar. Signalers load it and fetch-xor it concurrently. + __g1_start: Starting position of G1 (inclusive) +- * LSB is index of current G2. + * Modified by signalers while having acquired the condvar-internal lock + and observed concurrently by waiters. + __g1_orig_size: Initial size of G1 +@@ -280,11 +279,9 @@ __condvar_cleanup_waiting (void *arg) + * Reference count used by waiters concurrently with signalers that have + acquired the condvar-internal lock. + __g_signals: The number of signals that can still be consumed, relative to +- the current g1_start. (i.e. bits 31 to 1 of __g_signals are bits +- 31 to 1 of g1_start with the signal count added) ++ the current g1_start. (i.e. g1_start with the signal count added) + * Used as a futex word by waiters. Used concurrently by waiters and + signalers. +- * LSB is currently reserved and 0. + __g_size: Waiters remaining in this group (i.e., which have not been + signaled yet. + * Accessed by signalers and waiters that cancel waiting (both do so only +@@ -391,9 +388,8 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + too. */ + unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g); + uint64_t g1_start = __condvar_load_g1_start_relaxed (cond); +- unsigned int lowseq = (g1_start & 1) == g ? 
signals : g1_start & ~1U; + +- if (seq < (g1_start >> 1)) ++ if (seq < g1_start) + { + /* If the group is closed already, + then this waiter originally had enough extra signals to +@@ -406,13 +402,13 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, + by now, perhaps in the process of switching back to an older + G2, but in either case we're allowed to consume the available + signal and should not block anymore. */ +- if ((int)(signals - lowseq) >= 2) ++ if ((int)(signals - (unsigned int)g1_start) > 0) + { + /* Try to grab a signal. See above for MO. (if we do another loop + iteration we need to see the correct value of g1_start) */ + if (atomic_compare_exchange_weak_acquire ( + cond->__data.__g_signals + g, +- &signals, signals - 2)) ++ &signals, signals - 1)) + break; + else + continue; +-- +2.49.0 + diff --git a/meta/recipes-core/glibc/glibc_2.35.bb b/meta/recipes-core/glibc/glibc_2.35.bb index 265dcb9129..26e8d8c408 100644 --- a/meta/recipes-core/glibc/glibc_2.35.bb +++ b/meta/recipes-core/glibc/glibc_2.35.bb @@ -70,6 +70,7 @@ SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \ file://0026-PR25847-6.patch \ file://0026-PR25847-7.patch \ file://0026-PR25847-8.patch \ + file://0026-PR25847-9.patch \ \ file://0001-Revert-Linux-Implement-a-useful-version-of-_startup_.patch \ file://0002-get_nscd_addresses-Fix-subscript-typos-BZ-29605.patch \ From patchwork Tue Oct 14 22:44:51 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steve Sakoman X-Patchwork-Id: 72334 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from aws-us-west-2-korg-lkml-1.web.codeaurora.org (localhost.localdomain [127.0.0.1]) by smtp.lore.kernel.org (Postfix) with ESMTP id B66C5CCD190 for ; Tue, 14 Oct 2025 22:45:27 +0000 (UTC) Received: from mail-pg1-f182.google.com (mail-pg1-f182.google.com [209.85.215.182]) by mx.groups.io with SMTP id smtpd.web11.2592.1760481920577477640 for ; Tue, 14 Oct 2025 15:45:20 -0700 Authentication-Results: mx.groups.io; dkim=pass header.i=@sakoman-com.20230601.gappssmtp.com header.s=20230601 header.b=s7DmcQqh; spf=softfail (domain: sakoman.com, ip: 209.85.215.182, mailfrom: steve@sakoman.com) Received: by mail-pg1-f182.google.com with SMTP id 41be03b00d2f7-b5579235200so3745768a12.3 for ; Tue, 14 Oct 2025 15:45:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sakoman-com.20230601.gappssmtp.com; s=20230601; t=1760481920; x=1761086720; darn=lists.openembedded.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=w1aXkrmHyxb5VpV+/KxlTxZT80ODHkka8Lm0KNSYaTM=; b=s7DmcQqhlsew3h5baENpECQ/BAPuhwads7H0hwU1x8YM4ngpnomWljEG/Ws7EHpFOr F54spTjax7UB42rRmamIKZbtcyAcD1Lr3MgpDQfapfFMMWObGUOpUQ6jnd5zSwI3kGei foQuzqvzMEbLfeBxtMfLEVyYNa547NRY0yKesTxZGGj39+eeZdQJCYafLxYL8tPer6uD 6pFjW+tCdhxSTKeHCo6X/5EwZHFs9q6se4KYr3TcR5CofzUlOXwFGuRCJaPYNZ224Dkf 1EiKzJlbjPtFZu8hxxO/Ou1qCh1I5DpUDRP0miXfRTjfPFGEeOaNz2lJd3nuGz5myVKg oWSQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1760481920; x=1761086720; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=w1aXkrmHyxb5VpV+/KxlTxZT80ODHkka8Lm0KNSYaTM=; b=ZoZYeh/0yag5wxAA2k1LDsCI9Mr7T42awm5oQoiXm0lTwROaKxj4XafJNRpIxZA/8c 
9Ae8cerAjKAKPOzRWEZOqEf2ILDmJEsr6z7vZQ9LBe7OcP/P3gYwOc59WW+f0KZ8PojB 0GNOp7WajY3K9PRRcbnyMPH8R+6OugmcjitYAunGZPNFAAL+QkiW3PYGIFKIL2Ik2fnn iGWClEA7aWVQTWz/NCKZY4YyY/osZ1ExIoryEZDygLPkmeWRHCubJe0AhmKwrrQCe8sq 32zdUeFBwenYrHL3FmM5/oFssTJNcG+2dNy2V20ueHAlOiwilSaaBZsR2XErX9GjE/pj MZ0A== X-Gm-Message-State: AOJu0Yzy5VDfZ2l7EZJBcw1iuL6UsJebu22f/eUR1fN9bHE42vBSLwx8 WZWo66S393dY3cM5YZaJNqktLzzsRhbwG9D0dA/+6cz5ZwXDoIVMBxkDF5XVliu6v7VXg0XKClV 9nWW7 X-Gm-Gg: ASbGncvoqcO544eysfIPMaXzGnzrb7AhuOCYc4bl8h//5VSwFkf8OUntOZvJVgNB4JY PBl3hfR85mGOeZPIsItsSs2gREL9p6lgjnx79xuY35Cqk2aMXUMph52Majvo867K/lkpBHvO0eX EY0nH9XTxR+KSceiKfAgkEzvkJitn+7q0RvOJS/4cawr1DKib0r+Xg/Dj/QLZ8GvrgkPK61ZN40 Vzl54LOfwV+dUM9wf0v/Qj2Ya0n+q/G4zER7vheWrGSA8Q4/xpSQGsOw6DcRMfIdv5jBOjJLhjx XBA/4W2F2+Xfkl2FIia0NrE87jzuVWU9MnD5l28lyP7Dx6frzpZ7DyY4InH/EMCuHQCEPxo39Gj bPosOtrkRU3wmhsWAGbdS1nKd4c55mcvh X-Google-Smtp-Source: AGHT+IEuqirCFBapRMpJCETf/iwBrsuPS9vNnryjPVZKN4h6rRXEHaqWSAfVEgeTEzoz4Psb0nOj+g== X-Received: by 2002:a17:902:fc4f:b0:280:fe18:8479 with SMTP id d9443c01a7336-290272e0ac5mr287758835ad.51.1760481919826; Tue, 14 Oct 2025 15:45:19 -0700 (PDT) Received: from hexa.. ([2602:feb4:3b:2100:ebea:520a:7699:bba7]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-29034e20479sm174847365ad.47.2025.10.14.15.45.19 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 14 Oct 2025 15:45:19 -0700 (PDT) From: Steve Sakoman To: openembedded-core@lists.openembedded.org Subject: [OE-core][kirkstone 14/14] glibc: : PTHREAD_COND_INITIALIZER compatibility with pre-2.41 versions (bug 32786) Date: Tue, 14 Oct 2025 15:44:51 -0700 Message-ID: <8f1000d9dad5e51f08a40b0f6650204425cc8efb.1760481775.git.steve@sakoman.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: References: MIME-Version: 1.0 List-Id: X-Webhook-Received: from li982-79.members.linode.com [45.33.32.79] by aws-us-west-2-korg-lkml-1.web.codeaurora.org with HTTPS for ; Tue, 14 Oct 2025 22:45:27 -0000 X-Groupsio-URL: https://lists.openembedded.org/g/openembedded-core/message/224868 From: Sunil Dora The following commits have been cherry-picked from Glibc master branch: Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 [1] https://sourceware.org/git/?p=glibc.git;a=commit;h=dbc5a50d12eff4cb3f782129029d04b8a76f58e7 [2] https://sourceware.org/pipermail/libc-stable/2025-July/002282.html Signed-off-by: Sunil Dora Signed-off-by: Steve Sakoman --- .../glibc/glibc/0026-PR25847-10.patch | 54 +++++++++++++++++++ meta/recipes-core/glibc/glibc_2.35.bb | 1 + 2 files changed, 55 insertions(+) create mode 100644 meta/recipes-core/glibc/glibc/0026-PR25847-10.patch diff --git a/meta/recipes-core/glibc/glibc/0026-PR25847-10.patch b/meta/recipes-core/glibc/glibc/0026-PR25847-10.patch new file mode 100644 index 0000000000..99049d468a --- /dev/null +++ b/meta/recipes-core/glibc/glibc/0026-PR25847-10.patch @@ -0,0 +1,54 @@ +From 4f78382dd671f381db6d1f452e6f1593d17b177e Mon Sep 17 00:00:00 2001 +From: Florian Weimer +Date: Tue, 14 Oct 2025 06:53:40 -0700 +Subject: [PATCH] nptl: PTHREAD_COND_INITIALIZER compatibility with pre-2.41 + versions (bug 32786) + +The new initializer and struct layout does not initialize the +__g_signals field in the old struct layout before the change in +commit c36fc50781995e6758cae2b6927839d0157f213c ("nptl: Remove +g_refs from condition variables"). Bring back fields at the end +of struct __pthread_cond_s, so that they are again zero-initialized. 
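For illustration only (not part of this patch), a short C sketch of the language rule this fix leans on: a braced initializer zero-initializes every member it does not name, so appending fields at the end of a struct keeps an existing static initializer valid and guarantees the new tail starts out as zero. The struct and macro below are invented stand-ins, not the real pthread_cond_t layout.

#include <stdio.h>

struct demo_cond
{
  unsigned long long wseq;
  unsigned int g_signals[2];
  /* Fields appended later; never named by the old initializer.  */
  unsigned int unused_initialized_1;
  unsigned int unused_initialized_2;
};

/* "Old" initializer: only covers the first two members.  */
#define DEMO_COND_INITIALIZER { 0, { 0, 0 } }

int main (void)
{
  static struct demo_cond c = DEMO_COND_INITIALIZER;

  /* Both tail fields print as 0: unnamed members are zero-initialized.  */
  printf ("tail fields: %u %u\n",
          c.unused_initialized_1, c.unused_initialized_2);
  return 0;
}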
+ +The following commits have been cherry-picked from Glibc master branch: +Bug : https://sourceware.org/bugzilla/show_bug.cgi?id=25847 +commit: dbc5a50d12eff4cb3f782129029d04b8a76f58e7 + +Upstream-Status: Submitted +[https://sourceware.org/pipermail/libc-stable/2025-July/002282.html] + +Signed-off-by: Sunil Dora +--- + sysdeps/nptl/bits/thread-shared-types.h | 2 ++ + sysdeps/nptl/pthread.h | 2 +- + 2 files changed, 3 insertions(+), 1 deletion(-) + +diff --git a/sysdeps/nptl/bits/thread-shared-types.h b/sysdeps/nptl/bits/thread-shared-types.h +index 6f17afa4..2354ea21 100644 +--- a/sysdeps/nptl/bits/thread-shared-types.h ++++ b/sysdeps/nptl/bits/thread-shared-types.h +@@ -99,6 +99,8 @@ struct __pthread_cond_s + unsigned int __g1_orig_size; + unsigned int __wrefs; + unsigned int __g_signals[2]; ++ unsigned int __unused_initialized_1; ++ unsigned int __unused_initialized_2; + }; + + typedef unsigned int __tss_t; +diff --git a/sysdeps/nptl/pthread.h b/sysdeps/nptl/pthread.h +index bbb36540..8d6d24ff 100644 +--- a/sysdeps/nptl/pthread.h ++++ b/sysdeps/nptl/pthread.h +@@ -152,7 +152,7 @@ enum + + + /* Conditional variable handling. */ +-#define PTHREAD_COND_INITIALIZER { { {0}, {0}, {0, 0}, 0, 0, {0, 0} } } ++#define PTHREAD_COND_INITIALIZER { { {0}, {0}, {0, 0}, 0, 0, {0, 0}, 0, 0 } } + + + /* Cleanup buffers */ +-- +2.49.0 + diff --git a/meta/recipes-core/glibc/glibc_2.35.bb b/meta/recipes-core/glibc/glibc_2.35.bb index 26e8d8c408..1b5830699f 100644 --- a/meta/recipes-core/glibc/glibc_2.35.bb +++ b/meta/recipes-core/glibc/glibc_2.35.bb @@ -71,6 +71,7 @@ SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \ file://0026-PR25847-7.patch \ file://0026-PR25847-8.patch \ file://0026-PR25847-9.patch \ + file://0026-PR25847-10.patch \ \ file://0001-Revert-Linux-Implement-a-useful-version-of-_startup_.patch \ file://0002-get_nscd_addresses-Fix-subscript-typos-BZ-29605.patch \
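As a closing illustration (not part of the series), a minimal condvar smoke test that could be compiled with -pthread against an image carrying this patch stack; it exercises PTHREAD_COND_INITIALIZER together with the reworked pthread_cond_wait/pthread_cond_signal paths and uses only standard POSIX calls. The while (!ready) guard is the usual defense against spurious wakeups, which the algorithm permits.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int ready = 0;

static void *producer (void *arg)
{
  (void) arg;
  pthread_mutex_lock (&lock);
  ready = 1;
  pthread_cond_signal (&cond);
  pthread_mutex_unlock (&lock);
  return NULL;
}

int main (void)
{
  pthread_t t;
  pthread_create (&t, NULL, producer, NULL);

  pthread_mutex_lock (&lock);
  while (!ready)                       /* guard against spurious wakeups */
    pthread_cond_wait (&cond, &lock);
  pthread_mutex_unlock (&lock);

  pthread_join (t, NULL);
  puts ("condvar wait/signal completed");
  return 0;
}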