From 9e05560321b61cf8a85be5a004af67eebc481c4f Mon Sep 17 00:00:00 2001 From: Rogina Lee <363501705@qq.com> Date: Mon, 8 Apr 2024 16:54:59 +0800 Subject: [PATCH] add rvlwn-85 Signed-off-by: Rogina Lee <363501705@qq.com> --- _posts/2024-04-08-16-54-59-rvlwn-85.md | 1061 ++++++++++++++++++++++++ 1 file changed, 1061 insertions(+) create mode 100644 _posts/2024-04-08-16-54-59-rvlwn-85.md diff --git a/_posts/2024-04-08-16-54-59-rvlwn-85.md b/_posts/2024-04-08-16-54-59-rvlwn-85.md new file mode 100644 index 000000000..4626e8c1a --- /dev/null +++ b/_posts/2024-04-08-16-54-59-rvlwn-85.md @@ -0,0 +1,1061 @@ +--- +layout: weekly +author: '呀呀呀' +title: 'RISC-V Linux 内核及周边技术动态第 85 期' +draft: false +group: 'news' +album: 'RISC-V Linux' +license: 'cc-by-nc-nd-4.0' +permalink: /rvlwn-85/ +description: 'RISC-V Linux 内核及周边技术动态第 85 期' +category: + - 开源项目 + - Risc-V +tags: + - Linux + - RISC-V +--- + +> 时间:20240331
+> 编辑:晓怡
+> 仓库:[RISC-V Linux 内核技术调研活动](https://gitee.com/tinylab/riscv-linux)
+> 赞助:PLCT Lab, ISCAS + +## 内核动态 + +### RISC-V 架构支持 + +**[v1: bpf-next: Add 12-argument support for RV64 bpf trampoline](http://lore.kernel.org/linux-riscv/20240331092405.822571-1-pulehui@huaweicloud.com/)** + +> This patch adds 12 function arguments support for riscv64 bpf +> trampoline. The current bpf trampoline supports <= sizeof(u64) bytes +> scalar arguments [0] and <= 16 bytes struct arguments [1]. +> + +**[v1: iio: adc: add ADC driver for XuanTie TH1520 SoC](http://lore.kernel.org/linux-riscv/20240329200241.4122000-1-wefu@redhat.com/)** + +> This patchset adds initial support for XuanTie TH1520 ADC driver. +> This is modified from XuanTie TH1520 Linux_SDK_V1.4.2(linux v5.10.113) +> The original author is Fugang Duan +> + +**[v3: clk: starfive: jh7100: Use clk_hw for external input clocks](http://lore.kernel.org/linux-riscv/2082b46ab08755b1b66e0630a61619acac9d883f.1711714613.git.geert@linux-m68k.org/)** + +> The Starfive JH7100 clock driver does not use the DT "clocks" property +> to find the external main input clock, but instead relies on the name of +> the actual clock provider ("osc_sys"). This is fragile, and caused +> breakage when sanitizing clock node names in DTS. +> + +**[v4: Unified cross-architecture kernel-mode FPU API](http://lore.kernel.org/linux-riscv/20240329072441.591471-1-samuel.holland@sifive.com/)** + +> This series unifies the kernel-mode FPU API across several architectures +> by wrapping the existing functions (where needed) in consistently-named +> functions placed in a consistent header location, with mostly the same +> semantics: they can be called from preemptible or non-preemptible task +> context, +> + +**[v13: riscv: sophgo: add clock support for sg2042](http://lore.kernel.org/linux-riscv/cover.1711692169.git.unicorn_wang@outlook.com/)** + +> This series adds clock controller support for sophgo sg2042. +> + +**[v2: riscv control-flow integrity for usermode](http://lore.kernel.org/linux-riscv/20240329044459.3990638-1-debug@rivosinc.com/)** + +> I had sent RFC patchset early this year (January) [7] to enable CPU assisted +> control-flow integrity for usermode on riscv. Since then I've been able to do +> more testing of the changes. As part of testing effort, compiled a rootfs with +> shadow stack and landing pad enabled (libraries and binaries) and booted to +> shell. +> + +**[v6: riscv: sophgo: add dmamux support for Sophgo CV1800/SG2000 SoCs](http://lore.kernel.org/linux-riscv/IA1PR20MB4953F0FAED4373660C7873A2BB3A2@IA1PR20MB4953.namprd20.prod.outlook.com/)** + +> Add dma multiplexer support for the Sophgo CV1800/SG2000 SoCs. +> +> As the syscon device of CV1800 have a usb phy subdevices. The +> binding of the syscon can not be complete without the usb phy +> is finished. As a result, the binding of syscon is removed +> and will be evolved in its original series after the usb phy +> binding is fully explored. +> + +**[v1: riscv: ftrace: make stack walk more robust.](http://lore.kernel.org/linux-riscv/20240328184020.34278-1-puranjay12@gmail.com/)** + +> The current stack walker in riscv implemented in walk_stackframe() provides +> the PC to a callback function when it unwinds the stacks. This doesn't +> allow implementing stack walkers that need access to more information like +> the frame pointer, etc. 
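
围绕这一点,下面用一段简化的 C 代码示意此类改造的思路:把回调接口从"只传 PC"改成传递一个同时包含 PC、FP 的 unwind 状态。其中 `struct unwind_state`、`walk_stackframe_sketch()` 等命名均为本文为便于说明而假设的,并非补丁原文;帧记录布局按 RISC-V 内核启用帧指针时的惯例(`fp[-1]` 为返回地址 ra,`fp[-2]` 为上一级 fp),且省略了各类合法性校验。

```c
#include <linux/types.h>

/* 示意代码:以下命名为本文假设,并非补丁原文 */
struct unwind_state {
	unsigned long pc;
	unsigned long fp;
};

/* RISC-V 启用帧指针时,fp 紧跟在帧记录之后:
 * fp[-1] 保存返回地址 ra,fp[-2] 保存上一级 fp */
struct stackframe {
	unsigned long fp;
	unsigned long ra;
};

static void walk_stackframe_sketch(struct unwind_state *state,
				   bool (*consume)(const struct unwind_state *state,
						   void *arg),
				   void *arg)
{
	while (state->fp) {	/* 省略对 fp 对齐、范围的合法性校验 */
		const struct stackframe *frame =
			(const struct stackframe *)state->fp - 1;

		/* 回调一次性拿到完整的 unwind 状态,而不只是 pc */
		if (!consume(state, arg))
			break;

		state->pc = frame->ra;
		state->fp = frame->fp;
	}
}
```
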
+> + +**[v1: riscv: Kconfig.socs: Deprecate SOC_CANAAN and use SOC_CANAAN_K210 for K210](http://lore.kernel.org/linux-riscv/tencent_2E60E33C1F88A090B6B3A332AE528C6B8806@qq.com/)** + +> Since SOC_FOO should be deprecated from patch [1], and cleanup for other +> SoCs is already in the mailing list [2,3,4], so we deprecate the use of +> SOC_CANAAN and use ARCH_CANAAN for SoCs vendored by Canaan instead from now +> on. +> + +**[v1: ftrace: riscv: move from REGS to ARGS](http://lore.kernel.org/linux-riscv/20240328141845.128645-1-puranjay12@gmail.com/)** + +> This commit replaces riscv's support for FTRACE_WITH_REGS with support +> for FTRACE_WITH_ARGS. This is required for the ongoing effort to stop +> relying on stop_machine() for RISCV's implementation of ftrace. +> + +**[v16: Refactoring Microchip PCIe driver and add StarFive PCIe](http://lore.kernel.org/linux-riscv/20240328091835.14797-1-minda.chen@starfivetech.com/)** + +> This patchset final purpose is add PCIe driver for StarFive JH7110 SoC. +> JH7110 using PLDA XpressRICH PCIe IP. Microchip PolarFire Using the +> same IP and have commit their codes, which are mixed with PLDA +> controller codes and Microchip platform codes. +> + +**[v2: riscv: Call secondary mmu notifier when flushing the tlb](http://lore.kernel.org/linux-riscv/20240328073838.8776-1-alexghiti@rivosinc.com/)** + +> This is required to allow the IOMMU driver to correctly flush its own +> TLB. +> + +**[回复: v9: Add timer driver for StarFive JH7110 RISC-V SoC](http://lore.kernel.org/linux-riscv/NTZPR01MB0986A4F77EF371F7CBEE3A51E13BA@NTZPR01MB0986.CHNPR01.prod.partner.outlook.cn/)** + +> > This patch serises are to add timer driver for the StarFive JH7110 RISC-V SoC. +> > The first patch adds documentation to describe device tree bindings. The +> > subsequent patch adds timer driver and support +> + +**[v2: riscv: Various text patching improvements](http://lore.kernel.org/linux-riscv/20240327160520.791322-1-samuel.holland@sifive.com/)** + +> Here are a few changes to minimize calls to stop_machine() and +> flush_icache_*() in the various text patching functions, as well as +> to simplify the code. +> + +**[v1: Convert Tasklets to BH Workqueues](http://lore.kernel.org/linux-riscv/20240327160314.9982-1-apais@linux.microsoft.com/)** + +> This patch series represents a significant shift in how asynchronous +> execution in the bottom half (BH) context is handled within the kernel. +> Traditionally, tasklets have been the go-to mechanism for such operations. +> + +**[v1: clocksouce/timer-clint|riscv: some improvements](http://lore.kernel.org/linux-riscv/20240327153502.2133-1-jszhang@kernel.org/)** + +> This series is a simple improvement for timer-clint and timer-riscv: +> +> Add set_state_shutdown for timer-clint, this hook is used when +> switching clockevent from timer-clint to another timer. +> + +**[v2: riscv: access_ok() optimization](http://lore.kernel.org/linux-riscv/20240327143858.711792-1-samuel.holland@sifive.com/)** + +> This series optimizes access_ok() by defining TASK_SIZE_MAX. At Alex's +> suggestion, I also tried making TASK_SIZE constant (specifically by +> making PGDIR_SHIFT a variable instead of a ternary expression, then +> replacing the load with an immediate using ALTERNATIVE). +> + +**[v1: BeagleV Fire support](http://lore.kernel.org/linux-riscv/20240327-parkway-dodgy-f0fe1fa20892@spud/)** + +> Wee series adding support for the BeagleV Fire. 
I've had a dts sitting +> locally for this for over a year for testing Auto Update and I meant to +> submit something to mainline once the board got announced publicly, but +> only got around to that now. +> + +**[FAILED: Patch "clocksource/drivers/timer-riscv: Clear timer interrupt on timer initialization" failed to apply to 6.1-stable tree](http://lore.kernel.org/linux-riscv/20240327121249.2829814-1-sashal@kernel.org/)** + +> The patch below does not apply to the 6.1-stable tree. +> If someone wants it applied there, or to any other stable or longterm +> tree, then please email the backport, including the original git commit +> id to . +> + +**[v1: RISC-V: Test th.mxstatus.MAEE bit before enabling MAEE](http://lore.kernel.org/linux-riscv/20240327103130.3651950-1-christoph.muellner@vrull.eu/)** + +> Currently, the Linux kernel suffers from a boot regression when running +> on the c906 QEMU emulation. Details have been reported here by Björn Töpel: +> https://lists.gnu.org/archive/html/qemu-devel/2024-01/msg04766.html +> + +**[v12: riscv: sophgo: add clock support for sg2042](http://lore.kernel.org/linux-riscv/cover.1711527932.git.unicorn_wang@outlook.com/)** + +> This series adds clock controller support for sophgo sg2042. +> + +**[v1: cache: sifive_ccache: Partially convert to a platform driver](http://lore.kernel.org/linux-riscv/20240327054537.424980-1-samuel.holland@sifive.com/)** + +> Commit 8ec99b033147 ("irqchip/sifive-plic: Convert PLIC driver into a +> platform driver") broke ccache initialization because the PLIC IRQ +> + +### 进程调度 + +**[v1: sched/fair: Reset vlag in dequeue when PLAGE_LAG is disabled](http://lore.kernel.org/lkml/20240329091933.340739-1-spring.cxz@gmail.com/)** + +> The vlag is calculated in dequeue when PLAGE_LAG is disabled. If we +> enable the PLACE_LAG at some point, the old vlag of process will +> affect itself and other process. These are not in line with our +> original intention, where we expect the vlag of all processes to be +> calculated from 0 after the enable PLAGE_LAG. +> + +**[v1: perf sched: Rename switches to count and add usage description, options for latency](http://lore.kernel.org/lkml/20240328090005.8321-1-vineethr@linux.ibm.com/)** + +> Rename 'Switches' to 'Count' and document metrics shown for perf +> sched latency output. Also add options possible with perf sched +> latency. +> + +**[v2: RESEND: sched/fair: simplify __calc_delta()](http://lore.kernel.org/lkml/20240328011935.5894-1-daweilics@gmail.com/)** + +> Commit 5e963f2bd4654a202a8a05aa3a86cb0300b10e6c ("sched/fair: Commit to +> EEVDF") removed __calc_delta()'s use case where the input weight is not +> equal to NICE_0_LOAD. Now that weight is always NICE_0_LOAD, it is not +> required to have it as an input parameter. NICE_0_LOAD could be +> incorporated in __calc_delta() directly. +> + +**[v1: sched/fair: allow disabling newidle_balance with sched_relax_domain_level](http://lore.kernel.org/lkml/20240328004738.DhfLagcoOWIFY3BYMJMt-KrcWu0dKNz-0ei9jvEvTVg@z/)** + +> During the upgrade from Linux 5.4 we found a small (around 3%) +> performance regression which was tracked to commit +> c5b0a7eefc70150caf23e37bc9d639c68c87a097 +> + +**[v1: trace/sched: add tgid for sched_wakeup_template](http://lore.kernel.org/lkml/20240327084948.GA28114@didi-ThinkCentre-M930t-N000/)** + +> By doing this, we are able to filter tasks by tgid while we are +> tracing wakeup events by ebpf or other methods. 
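
下面是一个示意片段,展示在 `sched_wakeup_template` 这类事件类里补充 tgid 字段大致的样子。字段布局参照 `include/trace/events/sched.h` 中现有写法整理,具体改动以补丁原文为准:

```c
/* 示意:为 sched_wakeup_template 事件类补充 tgid 字段(非补丁原文) */
DECLARE_EVENT_CLASS(sched_wakeup_template,

	TP_PROTO(struct task_struct *p),

	TP_ARGS(p),

	TP_STRUCT__entry(
		__array(char,  comm, TASK_COMM_LEN)
		__field(pid_t, pid)
		__field(pid_t, tgid)			/* 新增:线程组 id */
		__field(int,   prio)
		__field(int,   target_cpu)
	),

	TP_fast_assign(
		memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
		__entry->pid        = p->pid;
		__entry->tgid       = p->tgid;		/* 新增 */
		__entry->prio       = p->prio;
		__entry->target_cpu = task_cpu(p);
	),

	TP_printk("comm=%s pid=%d tgid=%d prio=%d target_cpu=%03d",
		  __entry->comm, __entry->pid, __entry->tgid,
		  __entry->prio, __entry->target_cpu)
);
```

有了这个字段,就可以直接用 ftrace 的事件过滤器(例如 `tgid == 1234`)或 eBPF 按进程维度筛选唤醒事件,而不必再从 pid 反查。
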
+> + +**[v1: sched/fair: Combine EAS check with overutilized access](http://lore.kernel.org/lkml/20240326152616.380999-1-sshegde@linux.ibm.com/)** + +> So modify the helper function to return this pattern. This is more +> readable code as it would say, do something when root domain is not +> overutilized. This function always return true when EAS is disabled. +> + +**[v1: sched/fair: Simplify continue_balancing for newidle](http://lore.kernel.org/lkml/20240325153926.274284-1-sshegde@linux.ibm.com/)** + +> newidle(CPU_NEWLY_IDLE) balancing doesn't stop the load balancing if the +> continue_balancing flag is reset. Other two balancing (IDLE, BUSY) do +> that. newidle balance stops the load balancing if rq has a task or there +> is wakeup pending. The same checks are present in should_we_balance for +> newidle. Hence use the return value and simplify continue_balancing +> mechanism for newidle. Update the comment surrounding it as well. +> + +**[v1: sched/eevdf: Curb wakeup preemption further](http://lore.kernel.org/lkml/20240325060226.1540-1-kprateek.nayak@amd.com/)** + +> Bisection showed that the regression started at commit 147f3efaa241 +> ("sched/fair: Implement an EEVDF-like scheduling policy") and this was +> reported [2]. Further narrowing down than the commit is hard due to the +> extent of changes in the commit. EEVDF has seen extensive development +> since, but the regression persists. +> + +**[[GIT pull] sched/urgent for v6.9-rc1](http://lore.kernel.org/lkml/171129691660.2804823.9714349244324963954.tglx@xen13/)** + +> A single update for the documentation of the base_slice_ns tunable to +> clarify that any value which is less than the tick slice has no effect +> because the scheduler tick is not guaranteed to happen within the set time +> slice. +> + +### 内存管理 + +**[v1: SLUB: improve filling cpu partial a bit in get_partial_node()](http://lore.kernel.org/linux-mm/20240331021926.2732572-1-xiongwei.song@windriver.com/)** + +> This series is to remove the unnecessary check for filling cpu partial +> and improve the readability. +> +> Introduce slub_get_cpu_partial() and dummy function to prevent compiler +> warning with CONFIG_SLUB_CPU_PARTIAL disabled. This is done in patch 2. +> Use the helper in patch 3 and 4. +> + +**[Patch "mm/migrate: set swap entry values of THP tail pages properly." has been added to the 6.1-stable tree](http://lore.kernel.org/linux-mm/2024033050-reconcile-qualified-63fe@gregkh/)** + +> The tail pages in a THP can have swap entry information stored in their +> private field. When migrating to a new page, all tail pages of the new +> page need to update ->private to avoid future data corruption. +> + +**[Patch "mm/migrate: set swap entry values of THP tail pages properly." has been added to the 5.15-stable tree](http://lore.kernel.org/linux-mm/2024033040-scorecard-exploring-dfe0@gregkh/)** + +> This is a note to let you know that I've just added the patch titled +> +> mm/migrate: set swap entry values of THP tail pages properly. +> +> to the 5.15-stable tree which can be found at: +> http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary +> + +**[v2: mm/slub: Reduce memory consumption in extreme scenarios](http://lore.kernel.org/linux-mm/20240330082335.29710-1-chenjun102@huawei.com/)** + +> When kmalloc_node() is called without __GFP_THISNODE and the target node +> lacks sufficient memory, SLUB allocates a folio from a different node +> other than the requested node, instead of taking a partial slab from it. 
+> + +**[v12: Add AMD Secure Nested Paging (SEV-SNP) Hypervisor Support](http://lore.kernel.org/linux-mm/20240329225835.400662-1-michael.roth@amd.com/)** + +> This patchset is also available at: +> +> https://github.com/amdese/linux/commits/snp-host-v12 +> +> and is based on top of the following series: +> + +**[v2: selftests/mm: include strings.h for ffsl](http://lore.kernel.org/linux-mm/20240329185814.16304-1-edliaw@google.com/)** + +> Got a compilation error on Android for ffsl after 91b80cc5b39f +> ("selftests: mm: fix map_hugetlb failure on 64K page size systems") +> included vm_util.h. +> + +**[v1: mm: alloc_anon_folio: avoid doing vma_thp_gfp_mask in fallback cases](http://lore.kernel.org/linux-mm/20240329073750.20012-1-21cnbao@gmail.com/)** + +> Fallback rates surpassing 90% have been observed on phones utilizing 64KiB +> CONT-PTE mTHP. In these scenarios, when one out of every 16 PTEs fails +> to allocate large folios, the remaining 15 PTEs fallback. Consequently, +> invoking vma_thp_gfp_mask seems redundant in such cases. Furthermore, +> abstaining from its use can also contribute to improved code readability. +> + +**[v1: mm: huge_memory: add the missing folio_test_pmd_mappable() for THP split statistics](http://lore.kernel.org/linux-mm/a5341defeef27c9ac7b85c97f030f93e4368bbc1.1711694852.git.baolin.wang@linux.alibaba.com/)** + +> Now the mTHP can also be split or added into the deferred list, so add +> folio_test_pmd_mappable() validation for PMD mapped THP, to avoid confusion +> with PMD mapped THP related statistics. +> + +**[v2: support multi-size THP numa balancing](http://lore.kernel.org/linux-mm/cover.1711683069.git.baolin.wang@linux.alibaba.com/)** + +> This patchset tries to support mTHP numa balancing, as a simple solution +> to start, the NUMA balancing algorithm for mTHP will follow the THP strategy +> as the basic support. Please find details in each patch. +> + +**[v9: Improved Memory Tier Creation for CPUless NUMA Nodes](http://lore.kernel.org/linux-mm/20240329053353.309557-1-horenchuang@bytedance.com/)** + +> When a memory device, such as CXL1.1 type3 memory, is emulated as +> normal memory (E820_TYPE_RAM), the memory device is indistinguishable +> from normal DRAM in terms of memory tiering with the current implementation. +> The current memory tiering assigns all detected normal memory nodes +> to the same DRAM tier. This results in normal memory devices with +> + +**[v1: mempool: Modify mismatched function name](http://lore.kernel.org/linux-mm/20240329030118.68492-1-jiapeng.chong@linux.alibaba.com/)** + +> No functional modification involved. +> +> mm/mempool.c:245: warning: expecting prototype for mempool_init(). Prototype was for mempool_init_noprof() instead. +> mm/mempool.c:271: warning: expecting prototype for mempool_create_node(). Prototype was for mempool_create_node_noprof() instead. +> + +**[v6: netfs, cifs: Delegate high-level I/O to netfslib](http://lore.kernel.org/linux-mm/20240328165845.2782259-1-dhowells@redhat.com/)** + +> Here are patches to convert cifs to use the netfslib library. I've tested +> them with and without a cache. Unfortunately, if "-o fsc" is specified a leak +> of a tcon object shows up, particularly with the generic/013 xfstest that +> prevents further testing. I've investigated this and found that the tcon leak +> is actually present upstream, but just goes unnoticed unless it also pins an +> fscache volume cookie. 
+> + +**[v1: netfs, afs, 9p, cifs: Rework netfs to use ->writepages() to copy to cache](http://lore.kernel.org/linux-mm/20240328163424.2781320-1-dhowells@redhat.com/)** + +> The primary purpose of these patches is to rework the netfslib writeback +> implementation such that pages read from the cache are written to the cache +> through ->writepages(), thereby allowing the fscache page flag to be +> retired. +> + +**[v2: mm: add per-order mTHP alloc_success and alloc_fail counters](http://lore.kernel.org/linux-mm/20240328095139.143374-1-21cnbao@gmail.com/)** + +> Profiling a system blindly with mTHP has become challenging due +> to the lack of visibility into its operations. Presenting the +> success rate of mTHP allocations appears to be pressing need. +> + +**[v11: Support page table check PowerPC](http://lore.kernel.org/linux-mm/20240328045535.194800-3-rmclure@linux.ibm.com/)** + +> Support page table check on all PowerPC platforms. This works by +> serialising assignments, reassignments and clears of page table +> entries at each level in order to ensure that anonymous mappings +> have at most one writable consumer, and likewise that file-backed +> mappings are not simultaneously also anonymous mappings. +> + +**[v1: mTHP-friendly compression in zsmalloc and zram based on multi-pages](http://lore.kernel.org/linux-mm/20240327214816.31191-1-21cnbao@gmail.com/)** + +> mTHP is generally considered to potentially waste memory due to fragmentation, +> but it may also serve as a source of memory savings. +> When large folios are compressed at a larger granularity, we observe a remarkable +> decrease in CPU utilization and a significant improvement in compression ratios. +> + +**[v3: mm: workingset reporting](http://lore.kernel.org/linux-mm/20240327213108.2384666-1-yuanchu@google.com/)** + +> This patch series provides workingset reporting of user pages in +> lruvecs, of which coldness can be tracked by accessed bits and fd +> references. However, the concept of workingset applies generically to +> all types of memory, which could be kernel slab caches, discardable +> userspace caches (databases), or CXL.mem. Therefore, data sources might +> come from slab shrinkers, device drivers, or the userspace. +> + +**[v1: mm: Use rwsem assertion macros for mmap_lock](http://lore.kernel.org/linux-mm/20240327190701.1082560-1-willy@infradead.org/)** + +> This slightly strengthens our write assertion when lockdep is disabled. +> It also downgrades us from BUG_ON to WARN_ON, but I think that's an +> improvement. I don't think dumping the mm_struct was all that valuable; +> the call chain is what's important. +> + +**[v2: mm, netfs: Provide a means of invalidation without using launder_folio](http://lore.kernel.org/linux-mm/2506007.1711562145@warthog.procyon.org.uk/)** + +> mm, netfs: Provide a means of invalidation without using launder_folio +> +> Implement a replacement for launder_folio. The key feature of +> invalidate_inode_pages2() is that it locks each folio individually, unmaps +> it to prevent mmap'd accesses interfering and calls the ->launder_folio() +> address_space op to flush it. This has problems: firstly, each folio is +> written individually as one or more small writes; secondly, adjacent folios +> cannot be added so easily into the laundry; thirdly, it's yet another op to +> implement. 
+> + +### 文件系统 + +**[v1: ext4: support adding multi-delalloc blocks](http://lore.kernel.org/linux-fsdevel/20240330120236.3789589-1-yi.zhang@huaweicloud.com/)** + +> This patch series is the part 2 prepartory changes of the buffered IO +> iomap conversion, I picked them out from my buffered IO iomap conversion +> RFC series v3[1], and add bigalloc feature support. +> + +**[v4: Fuse-BPF and plans on merging with Fuse Passthrough](http://lore.kernel.org/linux-fsdevel/20240329015351.624249-1-drosen@google.com/)** + +> I've recently gotten some time to re-focus on fuse-bpf efforts, and +> had some questions on how to best integrate with recent changes that +> have landed in the last year. I've included a rebased version (ontop +> of bpf-next e63985ecd226 ("bpf, riscv64/cfi: Support kCFI + BPF on +> riscv64") of the old patchset for reference here. +> + +**[v1: fuse: Add initial support for fs-verity](http://lore.kernel.org/linux-fsdevel/20240328205822.1007338-1-richardfung@google.com/)** + +> Note this doesn't include support for FS_IOC_READ_VERITY_METADATA. I +> don't know of an existing use of it and I figured it would be better +> not to include code that wouldn't be used and tested. However if you feel +> like it should be added let me know. +> + +### 网络设备 + +**[v1: net-next: ipvlan: handle NETDEV_DOWN event](http://lore.kernel.org/netdev/1711892489-27931-2-git-send-email-venkat.x.venkatsubra@oracle.com/)** + +> In case of stacked devices, to help propagate the down +> link state from the parent/root device (to this leaf device), +> handle NETDEV_DOWN event like it is done now for NETDEV_UP. +> + +**[v1: net-next: tcp/dccp: complete lockless accesses to sk->sk_max_ack_backlog](http://lore.kernel.org/netdev/20240331090521.71965-1-kerneljasonxing@gmail.com/)** + +> Since commit 099ecf59f05b ("net: annotate lockless accesses to +> sk->sk_max_ack_backlog") decided to handle the sk_max_ack_backlog +> locklessly, there is one more function mostly called in TCP/DCCP +> cases. So this patch completes it:) +> + +**[v2: net-next: Avoid explicit cpumask var allocation on stack](http://lore.kernel.org/netdev/20240331053441.1276826-1-dawei.li@shingroup.cn/)** + +> This is v2 of previous series[1] about cpumask var on stack for net +> subsystem. +> + +**[v2: net-next: net: Add generic support for netdev LEDs](http://lore.kernel.org/netdev/20240330-v6-8-0-net-next-mv88e6xxx-leds-v4-v2-0-fc5beb9febc5@lunn.ch/)** + +> For some devices, the MAC controls the LEDs in the RJ45 connector, not +> the PHY. This patchset provides generic support for such LEDs, and +> adds the first user, mv88e6xxx. +> + +**[v1: net-next: batman-adv: bypass empty buckets in batadv_purge_orig_ref()](http://lore.kernel.org/netdev/20240330155438.2462326-1-edumazet@google.com/)** + +> Many syzbot reports are pointing to soft lockups in +> batadv_purge_orig_ref() [1] +> +> Root cause is unknown, but we can avoid spending too much +> time there and perhaps get more interesting reports. +> + +**[v2: net-next: caif: Use UTILITY_NAME_LENGTH instead of hard-coding 16](http://lore.kernel.org/netdev/8c1160501f69b64bb2d45ce9f26f746eec80ac77.1711787352.git.christophe.jaillet@wanadoo.fr/)** + +> UTILITY_NAME_LENGTH is 16. So better use the former when defining the +> 'utility_name' array. This makes the intent clearer when it is used around +> line 260. 
+> + +**[v1: bpf-next: selftests/bpf: Add sockaddr tests for kernel networking](http://lore.kernel.org/netdev/20240329191907.1808635-1-jrife@google.com/)** + +> In a follow up to these patches, +> +> - commit 0bdf399342c5("net: Avoid address overwrite in kernel_connect") +> - commit 86a7e0b69bd5("net: prevent rewrite of msg_name in sock_sendmsg()") +> - commit c889a99a21bf("net: prevent address rewrite in kernel_bind()") +> - commit 01b2885d9415("net: Save and restore msg_namelen in sock_sendmsg") +> + +**[v1: net: mptcp: prevent BPF accessing lowat from a subflow socket.](http://lore.kernel.org/netdev/d8cb7d8476d66cb0812a6e29cd1e626869d9d53e.1711738080.git.pabeni@redhat.com/)** + +> The root cause of the issue is that bpf allows accessing mptcp-level +> proto_ops from a tcp subflow scope. +> +> Fix the issue detecting the problematic call and preventing any action. +> +> Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/482 +> + +**[v1: net-next: tools: ynl: add ynl_dump_empty() helper](http://lore.kernel.org/netdev/20240329181651.319326-1-kuba@kernel.org/)** + +> Checking if dump is empty requires a couple of casts. +> Add a convenient wrapper. +> +> Add an example use in the netdev sample, loopback is always +> present so an empty dump is an error. +> + +**[v3: net: virtio_net: Do not send RSS key if it is not supported](http://lore.kernel.org/netdev/20240329171641.366520-1-leitao@debian.org/)** + +> There is a bug when setting the RSS options in virtio_net that can break +> the whole machine, getting the kernel into an infinite loop. +> + +**[v1: net-next: page_pool: allow direct bulk recycling](http://lore.kernel.org/netdev/20240329165507.3240110-1-aleksander.lobakin@intel.com/)** + +> Previously, there was no reliable way to check whether it's safe to use +> direct PP cache. The drivers were passing @allow_direct to the PP +> recycling functions and that was it. Bulk recycling is used by +> xdp_return_frame_bulk() on .ndo_xdp_xmit() frames completion where +> the page origin is unknown, thus the direct recycling has never been +> tried. +> + +**[v1: rhashtable: Improve grammar](http://lore.kernel.org/netdev/20240329-misc-rhashtable-v1-1-5862383ff798@gmx.net/)** + +> Change "a" to "an" according to the usual rules, fix an "if" that was +> mistyped as "in", improve grammar in "considerable slow" -> +> "considerably slower". +> + +**[v1: net: selftests: reuseaddr_conflict: add missing new line at the end of the output](http://lore.kernel.org/netdev/20240329160559.249476-1-kuba@kernel.org/)** + +> The netdev CI runs in a VM and captures serial, so stdout and +> stderr get combined. Because there's a missing new line in +> stderr the test ends up corrupting KTAP: +> + +**[v1: net-next: tcp/dccp: do not care about families in inet_twsk_purge()](http://lore.kernel.org/netdev/20240329153203.345203-1-edumazet@google.com/)** + +> We lost ability to unload ipv6 module a long time ago. +> +> Instead of calling expensive inet_twsk_purge() twice, +> we can handle all families in one round. +> + +**[v1: net-next: inet: preserve const qualifier in inet_csk()](http://lore.kernel.org/netdev/20240329144931.295800-1-edumazet@google.com/)** + +> We can change inet_csk() to propagate its argument const qualifier, +> thanks to container_of_const(). 
+> + +**[v2: Documentation: networking: document ISO 15765-2:2016](http://lore.kernel.org/netdev/20240329133458.323041-2-valla.francesco@gmail.com/)** + +> While the in-kernel ISO 15765-2:2016 (ISO-TP) stack is fully functional and +> easy to use, no documentation exists for it. +> + +**[v1: 6.1: octeontx2-af: Add validation of lmac](http://lore.kernel.org/netdev/20240329114133.45456-1-amishin@t-argos.ru/)** + +> With the addition of new MAC blocks like CN10K RPM and CN10KB +> RPM_USX, LMACs are noncontiguous. Though in most of the functions, +> lmac validation checks exist but in few functions they are missing. +> The problem has been fixed by the following patch which can be +> cleanly applied to the 6.1.y branch. +> + +**[v3: iwl-next: Introduce ETH56G PHY model for E825C products](http://lore.kernel.org/netdev/20240329112339.29642-14-karol.kolacinski@intel.com/)** + +> E825C products have a different PHY model than E822, E823 and E810 products. +> This PHY is ETH56G and its support is necessary to have functional PTP stack +> for E825C products. +> + +**[v2: 6.1.y: net: tls: handle backlogging of crypto requests](http://lore.kernel.org/netdev/20240329102540.3888561-1-srish.srinivasan@broadcom.com/)** + +> commit 8590541473188741055d27b955db0777569438e3 upstream +> +> Since we're setting the CRYPTO_TFM_REQ_MAY_BACKLOG flag on our +> requests to the crypto API, crypto_aead_{encrypt,decrypt} can return +> -EBUSY instead of -EINPROGRESS in valid situations. +> + +**[v2: Documentation: Add reconnect process for VDUSE](http://lore.kernel.org/netdev/20240329093832.140690-1-lulu@redhat.com/)** + +> Add a document explaining the reconnect process, including what the +> Userspace App needs to do and how it works with the kernel. +> + +**[v1: net-next: ethtool: Max power support](http://lore.kernel.org/netdev/20240329092321.16843-1-wojciech.drewek@intel.com/)** + +> Some ethernet modules use nonstandard power levels [1]. Extend ethtool +> module implementation to support new attributes that will allow user +> to change maximum power. Rename structures and functions to be more +> generic. Introduce an example of the new API in ice driver. +> + +**[v1: wifi: mac80211: correctly document struct mesh_table](http://lore.kernel.org/netdev/20240328-mesh_table-kerneldoc-v1-1-174c4df341b1@quicinc.com/)** + +> Currently kernel-doc -Wall reports: +> +> net/mac80211/ieee80211_i.h:687: warning: missing initial short description on line: +> * struct mesh_table +> + +### 安全增强 + +**[v2: drm/radeon/radeon_display: Decrease the size of allocated memory](http://lore.kernel.org/linux-hardening/AS8PR02MB723799AFF24E7524364F66708B392@AS8PR02MB7237.eurprd02.prod.outlook.com/)** + +> This is an effort to get rid of all multiplications from allocation +> functions in order to prevent integer overflows [1] [2]. +> + +**[v3: scsi: csiostor: Use kcalloc() instead of kzalloc()](http://lore.kernel.org/linux-hardening/AS8PR02MB7237BA2BBAA646DFDB21C63B8B392@AS8PR02MB7237.eurprd02.prod.outlook.com/)** + +> Use 2-factor multiplication argument form kcalloc() instead +> of kzalloc(). 
+> + +**[v2: perf/x86/amd/uncore: Use kcalloc*() instead of kzalloc*()](http://lore.kernel.org/linux-hardening/AS8PR02MB7237A07D73D6D15EBF72FD8D8B392@AS8PR02MB7237.eurprd02.prod.outlook.com/)** + +> As noted in the "Deprecated Interfaces, Language Features, Attributes, +> and Conventions" documentation [1], size calculations (especially +> multiplication) should not be performed in memory allocator (or similar) +> function arguments due to the risk of them overflowing. This could lead +> to values wrapping around and a smaller allocation being made than the +> caller was expecting. Using those allocations could lead to linear +> overflows of heap memory and other misbehaviors. +> + +**[v2: dmaengine: pl08x: Use kcalloc() instead of kzalloc()](http://lore.kernel.org/linux-hardening/AS8PR02MB72373D9261B3B166048A8E218B392@AS8PR02MB7237.eurprd02.prod.outlook.com/)** + +> This is an effort to get rid of all multiplications from allocation +> functions in order to prevent integer overflows [1]. +> + +**[v1: perf/x86/intel/uncore: Prefer struct_size over open coded arithmetic](http://lore.kernel.org/linux-hardening/AS8PR02MB7237F4D39BF6AA0FF40E91638B392@AS8PR02MB7237.eurprd02.prod.outlook.com/)** + +> This is an effort to get rid of all multiplications from allocation +> functions in order to prevent integer overflows [1][2]. +> + +**[v1: nouveau/gsp: Avoid addressing beyond end of rpc->entries](http://lore.kernel.org/linux-hardening/20240330141159.work.063-kees@kernel.org/)** + +> Using the end of rpc->entries[] for addressing runs into both compile-time +> and run-time detection of accessing beyond the end of the array. Use the +> base pointer instead, since was allocated with the additional bytes for +> storing the strings. Avoids the following warning in future GCC releases +> with support for __counted_by: +> + +**[v1: Squashfs: replace deprecated strncpy with strscpy](http://lore.kernel.org/linux-hardening/20240328-strncpy-fs-squashfs-namei-c-v1-1-5c7bcbbeb675@google.com/)** + +> strncpy() is deprecated for use on NUL-terminated destination strings +> [1] and as such we should prefer more robust and less ambiguous string +> interfaces. +> + +**[v1: Add sy7802 flash led driver](http://lore.kernel.org/linux-hardening/20240327-sy7802-v1-0-db74ab32faaf@apitzsch.eu/)** + +> This series introduces a driver for the Silergy SY7802 charge pump used +> in the BQ Aquaris M5 and X5 smartphones. +> + +**[v1: vmcore: replace strncpy with strtomem](http://lore.kernel.org/linux-hardening/20240327-strncpy-fs-proc-vmcore-c-v1-1-e025ed08b1b0@google.com/)** + +> strncpy() is in the process of being replaced as it is deprecated in +> some situations [1]. While the specific use of strncpy that this patch +> targets is not exactly deprecated, the real mission is to rid the kernel +> of all its uses. +> + +**[v2: net-next: compiler_types: add Endianness-dependent __counted_by_{le,be}](http://lore.kernel.org/linux-hardening/20240327142241.1745989-1-aleksander.lobakin@intel.com/)** + +> Some structures contain flexible arrays at the end and the counter for +> them, but the counter has explicit Endianness and thus __counted_by() +> can't be used directly. +> + +**[v1: next: wifi: wil6210: Annotate struct wmi_set_link_monitor_cmd with __counted_by()](http://lore.kernel.org/linux-hardening/ZgODZOB4fOBvKl7R@neat/)** + +> Prepare for the coming implementation by GCC and Clang of the __counted_by +> attribute. 
Flexible array members annotated with __counted_by can have +> their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for +> array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family +> functions). +> + +**[v4: Handle faults in KUnit tests](http://lore.kernel.org/linux-hardening/20240326095118.126696-1-mic@digikod.net/)** + +> This patch series teaches KUnit to handle kthread faults as errors, and +> it brings a few related fixes and improvements. +> + +**[v1: next: firewire: Annotate struct fw_iso_packet with __counted_by()](http://lore.kernel.org/linux-hardening/ZgIrOuR3JI%2FjzqoH@neat/)** + +> Prepare for the coming implementation by GCC and Clang of the __counted_by +> attribute. Flexible array members annotated with __counted_by can have +> their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for +> array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family +> functions). +> + +**[v1: next: fs: Annotate struct file_handle with __counted_by() and use struct_size()](http://lore.kernel.org/linux-hardening/ZgImCXTdGDTeBvSS@neat/)** + +> Prepare for the coming implementation by GCC and Clang of the __counted_by +> attribute. Flexible array members annotated with __counted_by can have +> their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for +> array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family +> functions). +> + +**[v4: Add support for QoS configuration](http://lore.kernel.org/linux-hardening/20240325181628.9407-1-quic_okukatla@quicinc.com/)** + +> This series adds QoS support for QNOC type device which can be found on +> SC7280 platform. It adds support for programming priority, +> priority forward disable and urgency forwarding. This helps in +> priortizing the traffic originating from different interconnect masters +> at NOC(Network On Chip). +> + +### 异步 IO + +**[v1: liburing: io_uring.h: Sync kernel header to fetch enum names](http://lore.kernel.org/io-uring/20240329215718.25048-1-krisman@suse.de/)** + +> After a report by Ritesh (YoSTEALTH) on github, we named the enums in +> the io_uring uapi header. Sync the change into liburing. +> + +**[v1: io_uring: return void from io_put_kbuf_comp()](http://lore.kernel.org/io-uring/20240329155054.1936666-1-ming.lei@redhat.com/)** + +> The two callers don't handle the return value of io_put_kbuf_comp(), so +> change its return type into void. +> + +**[v1: io_uring: kill dead code in io_req_complete_post](http://lore.kernel.org/io-uring/20240329154712.1936153-1-ming.lei@redhat.com/)** + +> Since commit 8f6c829491fe ("io_uring: remove struct io_tw_state::locked"), +> io_req_complete_post() is only called from io-wq submit work, where the +> request reference is guaranteed to be grabbed and won't drop to zero +> in io_req_complete_post(). +> + +**[v2: fs: claw back a few FMODE_* bits](http://lore.kernel.org/io-uring/20240328-gewendet-spargel-aa60a030ef74@brauner/)** + +> There's a bunch of flags that are purely based on what the file +> operations support while also never being conditionally set or unset. +> IOW, they're not subject to change for individual files. Imho, such +> flags don't need to live in f_mode they might as well live in the fops +> structs itself. +> + +**[v1: liburing: io_uring.h: Avoid anonymous enums](http://lore.kernel.org/io-uring/20240328001653.31124-1-krisman@suse.de/)** + +> anonymous enums, while valid, confuses Cython (Python to C translator), +> as reported by Ritesh (YoSTEALTH) . 
Since people are using this, just +> name the existing enums. +> + +**[v1: : fs: claw back a few FMODE_* bits](http://lore.kernel.org/io-uring/20240327-begibt-wacht-b9b9f4d1145a@brauner/)** + +> There's a bunch of flags that are purely based on what the file +> operations support while also never being conditionally set or unset. +> IOW, they're not subject to change for individual file opens. Imho, such +> flags don't need to live in f_mode they might as well live in the fops +> structs itself. +> + +**[v1: io_uring: refill request cache in memory order](http://lore.kernel.org/io-uring/eb71eb39-abaf-4ba5-8a71-a112bd5de377@kernel.dk/)** + +> The allocator will generally return memory in order, but +> __io_alloc_req_refill() then adds them to a stack and we'll extract them +> in the opposite order. This obviously isn't a huge deal +> + +### BPF + +**[v4: perf/x86/amd: add LBR capture support outside of hardware events](http://lore.kernel.org/bpf/20240331041830.2806741-1-andrii@kernel.org/)** + +> Add AMD-specific implementation of perf_snapshot_branch_stack static call that +> allows LBR capture from arbitrary points in the kernel. This is utilized by +> BPF programs. See patch #3 for all the details. +> + +**[v5: bpf-next: bpf: Add a generic bits iterator](http://lore.kernel.org/bpf/20240331034154.16284-1-laoar.shao@gmail.com/)** + +> Three new kfuncs, namely bpf_iter_bits_{new,next,destroy}, have been +> added for the new bpf_iter_bits functionality. These kfuncs enable the +> iteration of the bits from a given address and a given number of bits. +> + +**[v2: bpf-next: selftests/bpf: make multi-uprobe tests work in RELEASE=1 mode](http://lore.kernel.org/bpf/20240329190410.4191353-1-andrii@kernel.org/)** + +> When BPF selftests are built in RELEASE=1 mode with -O2 optimization +> level, uprobe_multi binary, called from multi-uprobe tests is optimized +> to the point that all the thousands of target uprobe_multi_func_XXX +> functions are eliminated, breaking tests. +> + +**[v1: bpf-next: Add internal-only BPF per-CPU instructions](http://lore.kernel.org/bpf/20240329184740.4084786-1-andrii@kernel.org/)** + +> Add two new BPF instructions for dealing with per-CPU memory. +> +> One, BPF_LDX | BPF_ADDR_PERCPU | BPF_DW (where BPF_ADD_PERCPU is unused +> 0xe0 opcode), resolved provided per-CPU address (offset) to an absolute +> address where per-CPU data resides for "this" CPU. This is the most universal, +> and, strictly speaking, the only per-CPU BPF instruction necessary. +> + +**[v1: bpf-next: bpf: Avoid kfree_rcu() under lock in bpf_lpm_trie.](http://lore.kernel.org/bpf/20240329171439.37813-1-alexei.starovoitov@gmail.com/)** + +> bpf_lpm lock can be the same. +> timer_base lock can also be the same due to timer migration. +> but rcu krcp lock is always per-cpu, so it cannot be the same lock. +> Hence it's a false positive. +> To avoid lockdep complain move kfree_rcu() after spin_unlock. +> + +**[v1: kbuild: Avoid weak external linkage where possible](http://lore.kernel.org/bpf/20240329093356.276289-5-ardb+git@google.com/)** + +> Weak external linkage is intended for cases where a symbol reference +> can remain unsatisfied in the final link. Taking the address of such a +> symbol should yield NULL if the reference was not satisfied. +> + +**[v1: bpf-next: bpf: add a verbose message if map limit is reached](http://lore.kernel.org/bpf/20240329072050.68289-1-aspsk@isovalent.com/)** + +> When more than 64 maps are used by a program the verifier return -E2BIG. 
+> Add a verbose message which highlights the error and also prints the +> actual limit. +> + +**[v1: bpf-next: net: netfilter: Make ct zone id configurable for bpf ct helper functions](http://lore.kernel.org/bpf/20240329041430.2176860-1-brad@faucet.nz/)** + +> Add ct zone id to bpf_ct_opts so that arbitrary ct zone can be +> set for xdp/tc bpf ct helper functions bpf_{xdp,skb}_ct_alloc +> and bpf_{xdp,skb}_ct_lookup. +> + +**[v1: bpf-next: bpf: Mark bpf prog stack with kmsan_unposion_memory in interpreter mode](http://lore.kernel.org/bpf/20240328185801.1843078-1-martin.lau@linux.dev/)** + +> syzbot reported uninit memory usages during map_{lookup,delete}_elem. +> +> This should address different syzbot reports on the uninit "void *key" +> argument during map_{lookup,delete}_elem. +> + +**[v1: First try to replace page_frag with page_frag_cache](http://lore.kernel.org/bpf/20240328133839.13620-1-linyunsheng@huawei.com/)** + +> This patchset tries to unfiy the page frag implementation +> by replacing page_frag with page_frag_cache for sk_page_frag() +> first. And will try to replace the rest of page_frag in the +> follow patchset. +> + +**[v2: perf/x86/amd: support capturing LBR from software events](http://lore.kernel.org/bpf/20240328133359.731818-1-andrii@kernel.org/)** + +> [0] added ability to capture LBR (Last Branch Records) on Intel CPUs +> from inside BPF program at pretty much any arbitrary point. This is +> extremely useful capability that allows to figure out otherwise +> hard-to-debug problems, because LBR is now available based on some +> application-defined conditions, not just hardware-supported events. +> + +**[v1: bpf-next: selftests/bpf: Handle EAGAIN in bpf_tcp_ca](http://lore.kernel.org/bpf/78a17486bd12d7afb4cce565d58dac3f15e55c49.1711620349.git.tanggeliang@kylinos.cn/)** + +> bpf_tcp_ca tests may emit EAGAIN sometimes. In that case, tests fail with +> "bytes != total_bytes" errors. Sending should continue, not break when +> errno is EAGAIN. This patch can make bpf_tcp_ca tests stable. +> + +**[v5: net-next: Add minimal XDP support to TI AM65 CPSW Ethernet driver](http://lore.kernel.org/bpf/20240223-am65-cpsw-xdp-basic-v5-0-bc1739170bc6@baylibre.com/)** + +> This patch adds XDP support to TI AM65 CPSW Ethernet driver. +> + +**[v1: bpf-next: bpf: freeze a task cgroup from bpf](http://lore.kernel.org/bpf/20240327225334.58474-1-tixxdz@gmail.com/)** + +> This patch series adds support to freeze the task cgroup hierarchy +> that is on a default cgroup v2 without going through kernfs interface. +> +> For some cases we want to freeze the cgroup of a task based on some +> signals, doing so from bpf is better than user space which could be +> too late. +> + +**[v1: bpf-next: uprobe: uretprobe speed up](http://lore.kernel.org/bpf/20240327102036.543283-1-jolsa@kernel.org/)** + +> hi, +> as part of the effort on speeding up the uprobes [0] coming with +> return uprobe optimization by using syscall instead of the trap +> on the uretprobe trampoline. +> + +**[v1: bpf,arena: Use helper sizeof_field in struct accessors](http://lore.kernel.org/bpf/20240327065334.8140-1-haiyue.wang@intel.com/)** + +> Use the well defined helper sizeof_field() to calculate the size of a +> struct member, instead of doing custom calculations. 
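
`sizeof_field()` 的用法示意如下(示例结构体为假设,仅用于对比两种写法):

```c
#include <linux/stddef.h>	/* sizeof_field() */
#include <linux/types.h>

struct arena_hdr_example {	/* 假想结构体,仅作演示 */
	u64 user_vm_start;
	u64 user_vm_end;
};

/* 修改前:手写"空指针解引用"式的成员大小计算 */
#define VM_START_SZ_OLD	sizeof(((struct arena_hdr_example *)0)->user_vm_start)

/* 修改后:使用内核提供的 sizeof_field() 辅助宏,意图一目了然 */
#define VM_START_SZ	sizeof_field(struct arena_hdr_example, user_vm_start)
```
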
+> + +**[v7: net-next: Device Memory TCP](http://lore.kernel.org/bpf/20240326225048.785801-1-almasrymina@google.com/)** + +> This revision largely rebases on top of net-next and addresses the feedback +> RFCv6 received from folks, namely Jakub, Yunsheng, Arnd, David, & Pavel. +> + +**[v1: bpf-next: bpf: support deferring bpf_link dealloc to after RCU grace period](http://lore.kernel.org/bpf/20240326211427.1156080-1-andrii@kernel.org/)** + +> BPF link for some program types is passed as a "context" which can be +> used by those BPF programs to look up additional information. E.g., for +> BPF raw tracepoints, link is used to fetch BPF cookie value, similarly +> for BPF multi-kprobes and multi-uprobes. +> + +**[v2: bpf-next: bench: fast in-kernel triggering benchmarks](http://lore.kernel.org/bpf/20240326162151.3981687-1-andrii@kernel.org/)** + +> Remove "legacy" triggering benchmarks which rely on syscalls (and thus syscall +> overhead is a noticeable part of benchmark, unfortunately). Replace them with +> faster versions that rely on triggering BPF programs in-kernel through another +> simple "driver" BPF program. See patch #2 with comparison results. +> + +**[v2: bpf-next: selftests/bpf: Enable cross platform testing for local vmtest](http://lore.kernel.org/bpf/20240326155736.3480081-1-pulehui@huaweicloud.com/)** + +> The variable $ARCH in the current script is platform semantics, not +> kernel semantics. Rename it to $PLATFORM so that we can easily use $ARCH +> in cross-compilation. For now, Using PLATFORM= and CROSS_COMPILE= +> options will enable cross platform testing: +> + +**[v2: bpf-next: BPF: support mark in bpf_fib_lookup](http://lore.kernel.org/bpf/20240326101742.17421-1-aspsk@isovalent.com/)** + +> This patch series adds policy routing support in bpf_fib_lookup. +> This is a useful functionality which was missing for a long time, +> as without it some networking setups can't be implemented in BPF. +> One example can be found here [1]. +> + +**[v1: bpf-next: bpf: Mitigate latency spikes associated with freeing non-preallocated htab](http://lore.kernel.org/bpf/20240326081207.73375-1-laoar.shao@gmail.com/)** + +> Following the recent upgrade of one of our BPF programs, we encountered +> significant latency spikes affecting other applications running on the same +> host. After thorough investigation, we identified that these spikes were +> primarily caused by the prolonged duration required to free a +> non-preallocated htab with approximately 2 million keys. +> + +**[v2: leds: trigger: legtrig-bpf: Add ledtrig-bpf trigger](http://lore.kernel.org/bpf/cover.1711415233.git.hodges.daniel.scott@gmail.com/)** + +> This patch set adds a led trigger that interfaces with the bpf +> subsystem. It allows for BPF programs to control LED activity using bpf +> kfuncs. This functionality is useful in giving users a physical +> indication that a BPF program has performed an operation such as +> handling a packet or probe point. +> + +## 周边技术动态 + +### Qemu + +**[v2: riscv: thead: Add th.sxstatus CSR emulation](http://lore.kernel.org/qemu-devel/20240329120427.684677-1-christoph.muellner@vrull.eu/)** + +> The th.sxstatus CSR can be used to identify available custom extension +> on T-Head CPUs. 
The CSR is documented here: +> https://github.com/T-head-Semi/thead-extension-spec/pull/46 +> + +**[v3: target/riscv: Support Zve32x and Zve64x extensions](http://lore.kernel.org/qemu-devel/20240328022343.6871-1-jason.chien@sifive.com/)** + +> This patch series adds the support for Zve32x and Zvx64x and makes vector +> registers visible in GDB if any of the V/Zve*/Zvk* extensions is enabled. +> + +**[v3: target/riscv/kvm/kvm-cpu.c: kvm_riscv_handle_sbi() fail with vendor-specific SBI](http://lore.kernel.org/qemu-devel/20240327125732.11739-1-alexei.filippov@syntacore.com/)** + +> kvm_riscv_handle_sbi() may return not supported return code to not trigger +> qemu abort with vendor-specific sbi. +> + +**[v1: riscv: thead: Add th.mxstatus CSR emulation](http://lore.kernel.org/qemu-devel/20240327100034.3636610-1-christoph.muellner@vrull.eu/)** + +> The th.mxstatus CSR can be used to identify available custom extension +> on T-Head CPUs. The CSR is documented here: +> https://github.com/T-head-Semi/thead-extension-spec/pull/45 +> + +### Buildroot + +**[v1: package/uclibc: bump to 1.0.47](http://lore.kernel.org/buildroot/ZgQcVayV4juxXnnZ@waldemar-brodkorb.de/)** + +> Fixes riscv port. NPTL/TLS fixed. C++ applications now working. +> Added explicit_bzero and reallocarray. +> + +### U-Boot + +**[v2: riscv: Move virtio scan to board_late_init()](http://lore.kernel.org/u-boot/20240328095824.1179072-1-l.stelmach@samsung.com/)** + +> When virtio_init() gets called from board_init() PCI isn't ready. Thus, +> virtio-over-PCI (e.g. network interfaces) devices can't be detected and +> used without additional `virtio scan` scan in the shell or a script. +> + +**[v1: riscv: adds T-Head C9xx basic and GMAC support.](http://lore.kernel.org/u-boot/20240327080817.44501-1-wefu@redhat.com/)** + +> This patchset adds T-Head C9xx basic support in arch/riscv/, +> updates TH1520 Soc/Lichee Pi4A dts files for GMAC support. +> Also enable designware ethernet & realtek phy in default configs, +> and some boot env option for booting linux from Ethernet. +> + +**[mx6cuboxi: failes to detect model](http://lore.kernel.org/u-boot/CAH9NwWe1qEo5r0UmdRh9CqFxmJYfHRU1YxoW8b8vTQwDB-227A@mail.gmail.com/)** + +> I am seeing model detection problems with the current git master. +> +> U-Boot 2024.04-rc5 (Mar 26 2024 - 15:59:22 +0100) +> + +**[GIT PULL: u-boot-riscv/master](http://lore.kernel.org/u-boot/ZgLME470PHld19If@swlinux02/)** + +> The following changes since commit dde373bde392c38649c8c4420e0c98ef8d38d9dc: +> +> Prepare v2024.04-rc5 (2024-03-25 21:56:50 -0400) +> +> are available in the Git repository at: +> +> https://source.denx.de/u-boot/custodians/u-boot-riscv.git +> + +**[回复: v2: riscv: add support for Milk-V Mars board](http://lore.kernel.org/u-boot/BJXPR01MB085553074644A86617F2485CE636A@BJXPR01MB0855.CHNPR01.prod.partner.outlook.cn/)** + +> > With this patch series the VisionFive 2 U-Boot SPL will detect that it is running on +> > a Milk-V board and patch the device-tree accordingly. +> > This is the same approach that has been taken to handle the differences +> > between the Visionfive 2 1.2B and 1.3A revisions. +> > + +
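
围绕最后这条"SPL 探测到实际板卡后修正设备树"的做法,附一个非常简化的思路示意:函数名与板卡探测方式均为本文假设,仅说明用 libfdt 接口改写 model 属性的大致形式,具体实现请以补丁系列为准。

```c
#include <stdbool.h>
#include <linux/libfdt.h>	/* U-Boot 自带的 libfdt 接口 */

/* 示意:若探测到实际运行在 Milk-V Mars 上,就地修改设备树根节点的 model。
 * 板卡探测(例如读取 EEPROM 中的板卡信息)此处省略。 */
static int fixup_board_dt_sketch(void *fdt, bool is_milkv_mars)
{
	if (!is_milkv_mars)
		return 0;

	return fdt_setprop_string(fdt, 0 /* 根节点 */, "model", "Milk-V Mars");
}
```
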