riscv: s64ilp32: Support k230 CLINT & PLIC whose phys_addr is beyond 4GB #4
Merged
Conversation
The k230 CLINT's phys_addr is 0xf04000000 and the PLIC's phys_addr is 0xf00000000; both lie beyond 4GB, outside the 32-bit range. MMU_SV39 in s64ilp32 can support the full width of the PPN, which exceeds 32 bits. So, enable PHYS_ADDR_T_64BIT to use this hardware feature. This patch only supports ioremap of a higher phys_addr; the phys_addr of RAM must still be kept within 4GB.

Signed-off-by: Guo Ren <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
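For context, a minimal sketch of what enabling PHYS_ADDR_T_64BIT buys a driver on this configuration: phys_addr_t becomes 64 bits wide even though the kernel itself is ILP32, so ioremap() can map registers that sit above 4GB. The base addresses come from the description above; the mapping size and all function names are illustrative assumptions, not taken from the k230 code.

```c
#include <linux/io.h>
#include <linux/sizes.h>
#include <linux/types.h>

/* k230 bases from the commit message; the mapping size is an assumption. */
#define K230_PLIC_BASE   0xf00000000ULL  /* beyond the 32-bit range */
#define K230_CLINT_BASE  0xf04000000ULL

static void __iomem *plic_regs;

static int k230_plic_map_sketch(void)
{
	/*
	 * With CONFIG_PHYS_ADDR_T_64BIT, phys_addr_t is u64, so the
	 * full SV39 PPN fits even on an ILP32 kernel. RAM must still
	 * live below 4GB; only MMIO may be mapped from up here.
	 */
	phys_addr_t base = K230_PLIC_BASE;

	plic_regs = ioremap(base, SZ_4K);
	if (!plic_regs)
		return -ENOMEM;
	return 0;
}
```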
RevySR pushed a commit that referenced this pull request on Jun 29, 2024
…s in tail_call

This patch solves the 10 tail_call testing issues in test_bpf. At this point, all tests of test_bpf in BPF_JIT mode have passed. Here is the comparison between s64ilp32, s64lp64, and s32ilp32:

- s64lp64

```
...
test_bpf: Summary: 1026 PASSED, 0 FAILED, [1014/1014 JIT'ed]
test_bpf: #0 Tail call leaf jited:1 188 PASS
test_bpf: #1 Tail call 2 jited:1 180 PASS
test_bpf: #2 Tail call 3 jited:1 203 PASS
test_bpf: #3 Tail call 4 jited:1 225 PASS
test_bpf: #4 Tail call load/store leaf jited:1 145 PASS
test_bpf: #5 Tail call load/store jited:1 195 PASS
test_bpf: #6 Tail call error path, max count reached jited:1 997 PASS
test_bpf: #7 Tail call count preserved across function calls jited:1 155563 PASS
test_bpf: #8 Tail call error path, NULL target jited:1 164 PASS
test_bpf: #9 Tail call error path, index out of range jited:1 136 PASS
test_bpf: test_tail_calls: Summary: 10 PASSED, 0 FAILED, [10/10 JIT'ed]
...
test_bpf: test_skb_segment: Summary: 2 PASSED, 0 FAILED
```

- s64ilp32

```
...
test_bpf: Summary: 1026 PASSED, 0 FAILED, [1014/1014 JIT'ed]
test_bpf: #0 Tail call leaf jited:1 160 PASS
test_bpf: #1 Tail call 2 jited:1 221 PASS
test_bpf: #2 Tail call 3 jited:1 251 PASS
test_bpf: #3 Tail call 4 jited:1 275 PASS
test_bpf: #4 Tail call load/store leaf jited:1 198 PASS
test_bpf: #5 Tail call load/store jited:1 262 PASS
test_bpf: #6 Tail call error path, max count reached jited:1 1390 PASS
test_bpf: #7 Tail call count preserved across function calls jited:1 204492 PASS
test_bpf: #8 Tail call error path, NULL target jited:1 199 PASS
test_bpf: #9 Tail call error path, index out of range jited:1 168 PASS
test_bpf: test_tail_calls: Summary: 10 PASSED, 0 FAILED, [10/10 JIT'ed]
...
test_bpf: test_skb_segment: Summary: 2 PASSED, 0 FAILED
```

- s32ilp32

```
...
test_bpf: Summary: 1027 PASSED, 0 FAILED, [832/1015 JIT'ed]
test_bpf: #0 Tail call leaf jited:1 266 PASS
test_bpf: #1 Tail call 2 jited:1 409 PASS
test_bpf: #2 Tail call 3 jited:1 481 PASS
test_bpf: #3 Tail call 4 jited:1 537 PASS
test_bpf: #4 Tail call load/store leaf jited:1 325 PASS
test_bpf: #5 Tail call load/store jited:1 427 PASS
test_bpf: #6 Tail call error path, max count reached jited:1 3050 PASS
test_bpf: #7 Tail call count preserved across function calls jited:1 255522 PASS
test_bpf: #8 Tail call error path, NULL target jited:1 315 PASS
test_bpf: #9 Tail call error path, index out of range jited:1 280 PASS
test_bpf: test_tail_calls: Summary: 10 PASSED, 0 FAILED, [10/10 JIT'ed]
...
test_bpf: test_skb_segment: Summary: 2 PASSED, 0 FAILED
```

In short, s64ilp32 and s64lp64 perform consistently, both in the number of cases the JIT can handle and in execution time. In contrast, only 80% of the s32ilp32 cases can be JIT-compiled, and execution time is also longer in the cases where the JIT is used.

Signed-off-by: Chen Pei <[email protected]>
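For readers unfamiliar with what these tests exercise, here is a hedged sketch of a BPF tail call in restricted C; the section, map, and program names are illustrative. A tail call jumps from one BPF program to another through a BPF_MAP_TYPE_PROG_ARRAY, and the JIT must preserve the per-chain tail-call count across the jump, which is the property test #7 above measures.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Program array holding the jump targets; layout is illustrative. */
struct {
	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
	__uint(max_entries, 2);
	__type(key, __u32);
	__type(value, __u32);
} jmp_table SEC(".maps");

SEC("socket")
int dispatcher(struct __sk_buff *skb)
{
	/* Jump to the program stored at index 0 of jmp_table. The
	 * kernel enforces a maximum chain depth (MAX_TAIL_CALL_CNT),
	 * which the "max count reached" test above exercises. */
	bpf_tail_call(skb, &jmp_table, 0);

	/* Reached only if the tail call fails (empty slot, bad index). */
	return 0;
}

char _license[] SEC("license") = "GPL";
```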
RevySR pushed a commit that referenced this pull request on Jun 29, 2024
commit 9d274c1 upstream.

We have been seeing crashes on duplicate keys in btrfs_set_item_key_safe():

```
BTRFS critical (device vdb): slot 4 key (450 108 8192) new key (450 108 8192)
------------[ cut here ]------------
kernel BUG at fs/btrfs/ctree.c:2620!
invalid opcode: 0000 [#1] PREEMPT SMP PTI
CPU: 0 PID: 3139 Comm: xfs_io Kdump: loaded Not tainted 6.9.0 #6
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-2.fc40 04/01/2014
RIP: 0010:btrfs_set_item_key_safe+0x11f/0x290 [btrfs]
```

With the following stack trace:

```
#0  btrfs_set_item_key_safe (fs/btrfs/ctree.c:2620:4)
#1  btrfs_drop_extents (fs/btrfs/file.c:411:4)
#2  log_one_extent (fs/btrfs/tree-log.c:4732:9)
#3  btrfs_log_changed_extents (fs/btrfs/tree-log.c:4955:9)
#4  btrfs_log_inode (fs/btrfs/tree-log.c:6626:9)
#5  btrfs_log_inode_parent (fs/btrfs/tree-log.c:7070:8)
#6  btrfs_log_dentry_safe (fs/btrfs/tree-log.c:7171:8)
#7  btrfs_sync_file (fs/btrfs/file.c:1933:8)
#8  vfs_fsync_range (fs/sync.c:188:9)
#9  vfs_fsync (fs/sync.c:202:9)
#10 do_fsync (fs/sync.c:212:9)
#11 __do_sys_fdatasync (fs/sync.c:225:9)
#12 __se_sys_fdatasync (fs/sync.c:223:1)
#13 __x64_sys_fdatasync (fs/sync.c:223:1)
#14 do_syscall_x64 (arch/x86/entry/common.c:52:14)
#15 do_syscall_64 (arch/x86/entry/common.c:83:7)
#16 entry_SYSCALL_64+0xaf/0x14c (arch/x86/entry/entry_64.S:121)
```

So we're logging a changed extent from fsync, which is splitting an extent in the log tree. But this split part already exists in the tree, triggering the BUG().

This is the state of the log tree at the time of the crash, dumped with drgn (https://github.com/osandov/drgn/blob/main/contrib/btrfs_tree.py) to get more details than btrfs_print_leaf() gives us:

```
>>> print_extent_buffer(prog.crashed_thread().stack_trace()[0]["eb"])
leaf 33439744 level 0 items 72 generation 9 owner 18446744073709551610
leaf 33439744 flags 0x100000000000000
fs uuid e5bd3946-400c-4223-8923-190ef1f18677
chunk uuid d58cb17e-6d02-494a-829a-18b7d8a399da
	item 0 key (450 INODE_ITEM 0) itemoff 16123 itemsize 160
		generation 7 transid 9 size 8192 nbytes 8473563889606862198
		block group 0 mode 100600 links 1 uid 0 gid 0 rdev 0
		sequence 204 flags 0x10(PREALLOC)
		atime 1716417703.220000000 (2024-05-22 15:41:43)
		ctime 1716417704.983333333 (2024-05-22 15:41:44)
		mtime 1716417704.983333333 (2024-05-22 15:41:44)
		otime 17592186044416.000000000 (559444-03-08 01:40:16)
	item 1 key (450 INODE_REF 256) itemoff 16110 itemsize 13
		index 195 namelen 3 name: 193
	item 2 key (450 XATTR_ITEM 1640047104) itemoff 16073 itemsize 37
		location key (0 UNKNOWN.0 0) type XATTR
		transid 7 data_len 1 name_len 6
		name: user.a
		data a
	item 3 key (450 EXTENT_DATA 0) itemoff 16020 itemsize 53
		generation 9 type 1 (regular)
		extent data disk byte 303144960 nr 12288
		extent data offset 0 nr 4096 ram 12288
		extent compression 0 (none)
	item 4 key (450 EXTENT_DATA 4096) itemoff 15967 itemsize 53
		generation 9 type 2 (prealloc)
		prealloc data disk byte 303144960 nr 12288
		prealloc data offset 4096 nr 8192
	item 5 key (450 EXTENT_DATA 8192) itemoff 15914 itemsize 53
		generation 9 type 2 (prealloc)
		prealloc data disk byte 303144960 nr 12288
		prealloc data offset 8192 nr 4096
	...
```

So the real problem happened earlier: notice that items 4 (4k-12k) and 5 (8k-12k) overlap. Both are prealloc extents. Item 4 straddles i_size and item 5 starts at i_size.

Here is the state of the filesystem tree at the time of the crash:

```
>>> root = prog.crashed_thread().stack_trace()[2]["inode"].root
>>> ret, nodes, slots = btrfs_search_slot(root, BtrfsKey(450, 0, 0))
>>> print_extent_buffer(nodes[0])
leaf 30425088 level 0 items 184 generation 9 owner 5
leaf 30425088 flags 0x100000000000000
fs uuid e5bd3946-400c-4223-8923-190ef1f18677
chunk uuid d58cb17e-6d02-494a-829a-18b7d8a399da
	...
	item 179 key (450 INODE_ITEM 0) itemoff 4907 itemsize 160
		generation 7 transid 7 size 4096 nbytes 12288
		block group 0 mode 100600 links 1 uid 0 gid 0 rdev 0
		sequence 6 flags 0x10(PREALLOC)
		atime 1716417703.220000000 (2024-05-22 15:41:43)
		ctime 1716417703.220000000 (2024-05-22 15:41:43)
		mtime 1716417703.220000000 (2024-05-22 15:41:43)
		otime 1716417703.220000000 (2024-05-22 15:41:43)
	item 180 key (450 INODE_REF 256) itemoff 4894 itemsize 13
		index 195 namelen 3 name: 193
	item 181 key (450 XATTR_ITEM 1640047104) itemoff 4857 itemsize 37
		location key (0 UNKNOWN.0 0) type XATTR
		transid 7 data_len 1 name_len 6
		name: user.a
		data a
	item 182 key (450 EXTENT_DATA 0) itemoff 4804 itemsize 53
		generation 9 type 1 (regular)
		extent data disk byte 303144960 nr 12288
		extent data offset 0 nr 8192 ram 12288
		extent compression 0 (none)
	item 183 key (450 EXTENT_DATA 8192) itemoff 4751 itemsize 53
		generation 9 type 2 (prealloc)
		prealloc data disk byte 303144960 nr 12288
		prealloc data offset 8192 nr 4096
```

Item 5 in the log tree corresponds to item 183 in the filesystem tree, but nothing matches item 4. Furthermore, item 183 is the last item in the leaf.

btrfs_log_prealloc_extents() is responsible for logging prealloc extents beyond i_size. It first truncates any previously logged prealloc extents that start beyond i_size. Then, it walks the filesystem tree and copies the prealloc extent items to the log tree. If it hits the end of a leaf, then it calls btrfs_next_leaf(), which unlocks the tree and does another search. However, while the filesystem tree is unlocked, an ordered extent completion may modify the tree. In particular, it may insert an extent item that overlaps with an extent item that was already copied to the log tree.

This may manifest in several ways depending on the exact scenario, including an EEXIST error that is silently translated to a full sync, overlapping items in the log tree, or this crash. This particular crash is triggered by the following sequence of events:

- Initially, the file has i_size=4k, a regular extent from 0-4k, and a prealloc extent beyond i_size from 4k-12k. The prealloc extent item is the last item in its B-tree leaf.
- The file is fsync'd, which copies its inode item and both extent items to the log tree.
- An xattr is set on the file, which sets the BTRFS_INODE_COPY_EVERYTHING flag.
- The range 4k-8k in the file is written using direct I/O. i_size is extended to 8k, but the ordered extent is still in flight.
- The file is fsync'd. Since BTRFS_INODE_COPY_EVERYTHING is set, this calls copy_inode_items_to_log(), which calls btrfs_log_prealloc_extents().
- btrfs_log_prealloc_extents() finds the 4k-12k prealloc extent in the filesystem tree. Since it starts before i_size, it skips it. Since it is the last item in its B-tree leaf, it calls btrfs_next_leaf().
- btrfs_next_leaf() unlocks the path.
- The ordered extent completion runs, which converts the 4k-8k part of the prealloc extent to written and inserts the remaining prealloc part from 8k-12k.
- btrfs_next_leaf() does a search and finds the new prealloc extent 8k-12k.
- btrfs_log_prealloc_extents() copies the 8k-12k prealloc extent into the log tree. Note that it overlaps with the 4k-12k prealloc extent that was copied to the log tree by the first fsync.
- fsync calls btrfs_log_changed_extents(), which tries to log the 4k-8k extent that was written.
- This tries to drop the range 4k-8k in the log tree, which requires adjusting the start of the 4k-12k prealloc extent in the log tree to 8k.
- btrfs_set_item_key_safe() sees that there is already an extent starting at 8k in the log tree and calls BUG().

Fix this by detecting when we're about to insert an overlapping file extent item in the log tree and truncating the part that would overlap (a sketch of this idea appears after this message).

CC: [email protected] # 6.1+
Reviewed-by: Filipe Manana <[email protected]>
Signed-off-by: Omar Sandoval <[email protected]>
Signed-off-by: David Sterba <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
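The fix's core idea, as a hedged sketch rather than the actual btrfs patch: before inserting a file extent item into the log tree, check whether an already-logged item covers its front and shrink the new item until the two no longer overlap. The struct and function names here are illustrative.

```c
#include <linux/types.h>

/*
 * Hedged sketch: 'existing' is a file extent already copied to the
 * log tree, 'ins' is the item about to be inserted. Names and
 * layout are illustrative, not the real btrfs structures.
 */
struct file_extent_range {
	u64 start;	/* file offset of the extent item */
	u64 len;	/* number of bytes it covers */
};

static void truncate_log_overlap(const struct file_extent_range *existing,
				 struct file_extent_range *ins)
{
	u64 existing_end = existing->start + existing->len;

	/* New item starts inside an already-logged range: push its
	 * start past that range and shrink its length to match. */
	if (ins->start >= existing->start && ins->start < existing_end) {
		u64 delta = existing_end - ins->start;

		/* In the crash above: 4k-12k is already logged and
		 * 8k-12k arrives, so the new item shrinks to zero
		 * length and nothing overlapping is inserted. */
		ins->start = existing_end;
		ins->len = ins->len > delta ? ins->len - delta : 0;
	}
}
```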
RevySR pushed a commit that referenced this pull request on Jun 29, 2024
commit 22f0081 upstream.

The syzbot fuzzer found that the interrupt-URB completion callback in the cdc-wdm driver was taking too long, and the driver's immediate resubmission of interrupt URBs with -EPROTO status combined with the dummy-hcd emulation to cause a CPU lockup:

```
cdc_wdm 1-1:1.0: nonzero urb status received: -71
cdc_wdm 1-1:1.0: wdm_int_callback - 0 bytes
watchdog: BUG: soft lockup - CPU#0 stuck for 26s! [syz-executor782:6625]
CPU#0 Utilization every 4s during lockup:
	#1:  98% system, 0% softirq, 3% hardirq, 0% idle
	#2:  98% system, 0% softirq, 3% hardirq, 0% idle
	#3:  98% system, 0% softirq, 3% hardirq, 0% idle
	#4:  98% system, 0% softirq, 3% hardirq, 0% idle
	#5:  98% system, 1% softirq, 3% hardirq, 0% idle
Modules linked in:
irq event stamp: 73096
hardirqs last enabled at (73095): [<ffff80008037bc00>] console_emit_next_record kernel/printk/printk.c:2935 [inline]
hardirqs last enabled at (73095): [<ffff80008037bc00>] console_flush_all+0x650/0xb74 kernel/printk/printk.c:2994
hardirqs last disabled at (73096): [<ffff80008af10b00>] __el1_irq arch/arm64/kernel/entry-common.c:533 [inline]
hardirqs last disabled at (73096): [<ffff80008af10b00>] el1_interrupt+0x24/0x68 arch/arm64/kernel/entry-common.c:551
softirqs last enabled at (73048): [<ffff8000801ea530>] softirq_handle_end kernel/softirq.c:400 [inline]
softirqs last enabled at (73048): [<ffff8000801ea530>] handle_softirqs+0xa60/0xc34 kernel/softirq.c:582
softirqs last disabled at (73043): [<ffff800080020de8>] __do_softirq+0x14/0x20 kernel/softirq.c:588
CPU: 0 PID: 6625 Comm: syz-executor782 Tainted: G W 6.10.0-rc2-syzkaller-g8867bbd4a056 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
```

Testing showed that the problem did not occur if the two error messages -- the first two lines above -- were removed; apparently adding material to the kernel log takes a surprisingly large amount of time.

In any case, the best approach for preventing these lockups and to avoid spamming the log with thousands of error messages per second is to ratelimit the two dev_err() calls. Therefore we replace them with dev_err_ratelimited().

Signed-off-by: Alan Stern <[email protected]>
Suggested-by: Greg KH <[email protected]>
Reported-and-tested-by: [email protected]
Closes: https://lore.kernel.org/linux-usb/[email protected]/
Reported-and-tested-by: [email protected]
Closes: https://lore.kernel.org/linux-usb/[email protected]/
Fixes: 9908a32 ("USB: remove err() macro from usb class drivers")
Link: https://lore.kernel.org/linux-usb/[email protected]/
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
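The change itself is small; below is a hedged sketch of the pattern, with the surrounding callback heavily abbreviated. dev_err_ratelimited() has the same format-string interface as dev_err() but drops messages once the default ratelimit is exceeded, so a storm of -EPROTO completions can no longer monopolize the CPU with printk work.

```c
#include <linux/device.h>
#include <linux/usb.h>

/* Abbreviated sketch of a cdc-wdm-style interrupt completion path;
 * the real wdm_int_callback() does much more than this. */
static void wdm_int_callback_sketch(struct urb *urb)
{
	struct device *dev = &urb->dev->dev;
	int status = urb->status;

	if (status) {
		/* Was dev_err(); ratelimited so that a flood of
		 * resubmission errors cannot soft-lock the CPU. */
		dev_err_ratelimited(dev,
				    "nonzero urb status received: %d\n",
				    status);
	}
}
```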
RevySR pushed a commit that referenced this pull request on Jun 29, 2024
[ Upstream commit f1e197a ]

trace_drop_common() is called with preemption disabled, and it acquires a spin_lock. This is problematic for RT kernels because spin_locks are sleeping locks in this configuration, which causes the following splat:

```
BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 449, name: rcuc/47
preempt_count: 1, expected: 0
RCU nest depth: 2, expected: 2
5 locks held by rcuc/47/449:
 #0: ff1100086ec30a60 ((softirq_ctrl.lock)){+.+.}-{2:2}, at: __local_bh_disable_ip+0x105/0x210
 #1: ffffffffb394a280 (rcu_read_lock){....}-{1:2}, at: rt_spin_lock+0xbf/0x130
 #2: ffffffffb394a280 (rcu_read_lock){....}-{1:2}, at: __local_bh_disable_ip+0x11c/0x210
 #3: ffffffffb394a160 (rcu_callback){....}-{0:0}, at: rcu_do_batch+0x360/0xc70
 #4: ff1100086ee07520 (&data->lock){+.+.}-{2:2}, at: trace_drop_common.constprop.0+0xb5/0x290
irq event stamp: 139909
hardirqs last enabled at (139908): [<ffffffffb1df2b33>] _raw_spin_unlock_irqrestore+0x63/0x80
hardirqs last disabled at (139909): [<ffffffffb19bd03d>] trace_drop_common.constprop.0+0x26d/0x290
softirqs last enabled at (139892): [<ffffffffb07a1083>] __local_bh_enable_ip+0x103/0x170
softirqs last disabled at (139898): [<ffffffffb0909b33>] rcu_cpu_kthread+0x93/0x1f0
Preemption disabled at:
[<ffffffffb1de786b>] rt_mutex_slowunlock+0xab/0x2e0
CPU: 47 PID: 449 Comm: rcuc/47 Not tainted 6.9.0-rc2-rt1+ #7
Hardware name: Dell Inc. PowerEdge R650/0Y2G81, BIOS 1.6.5 04/15/2022
Call Trace:
 <TASK>
 dump_stack_lvl+0x8c/0xd0
 dump_stack+0x14/0x20
 __might_resched+0x21e/0x2f0
 rt_spin_lock+0x5e/0x130
 ? trace_drop_common.constprop.0+0xb5/0x290
 ? skb_queue_purge_reason.part.0+0x1bf/0x230
 trace_drop_common.constprop.0+0xb5/0x290
 ? preempt_count_sub+0x1c/0xd0
 ? _raw_spin_unlock_irqrestore+0x4a/0x80
 ? __pfx_trace_drop_common.constprop.0+0x10/0x10
 ? rt_mutex_slowunlock+0x26a/0x2e0
 ? skb_queue_purge_reason.part.0+0x1bf/0x230
 ? __pfx_rt_mutex_slowunlock+0x10/0x10
 ? skb_queue_purge_reason.part.0+0x1bf/0x230
 trace_kfree_skb_hit+0x15/0x20
 trace_kfree_skb+0xe9/0x150
 kfree_skb_reason+0x7b/0x110
 skb_queue_purge_reason.part.0+0x1bf/0x230
 ? __pfx_skb_queue_purge_reason.part.0+0x10/0x10
 ? mark_lock.part.0+0x8a/0x520
...
```

trace_drop_common() also disables interrupts, but this is a minor issue because we could easily replace it with a local_lock.

Replace the spin_lock with raw_spin_lock to avoid sleeping in atomic context.

Signed-off-by: Wander Lairson Costa <[email protected]>
Reported-by: Hu Chunyu <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
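A hedged sketch of the lock conversion, with the guarded structure heavily abbreviated: on PREEMPT_RT a spinlock_t is backed by a sleeping rtmutex, so code that runs with preemption disabled must use raw_spinlock_t, which remains a true spinning lock in every configuration.

```c
#include <linux/spinlock.h>

/* Abbreviated stand-in for the per-CPU data that trace_drop_common()
 * guards; the real structure holds the dropwatch skb state. */
struct drop_data_sketch {
	raw_spinlock_t lock;	/* was spinlock_t, which sleeps on RT */
	int drops;
};

static void drop_data_init(struct drop_data_sketch *data)
{
	raw_spin_lock_init(&data->lock);
	data->drops = 0;
}

static void record_drop(struct drop_data_sketch *data)
{
	unsigned long flags;

	/* raw_spin_lock_irqsave() never sleeps, even on PREEMPT_RT,
	 * so it is safe in this preemption-disabled tracepoint path. */
	raw_spin_lock_irqsave(&data->lock, flags);
	data->drops++;
	raw_spin_unlock_irqrestore(&data->lock, flags);
}
```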
RevySR pushed a commit that referenced this pull request on Jun 29, 2024
[ Upstream commit af0cb3f ]

Xiumei and Christoph reported the following lockdep splat, complaining of the qdisc root lock being taken twice:

```
 ============================================
 WARNING: possible recursive locking detected
 6.7.0-rc3+ #598 Not tainted
 --------------------------------------------
 swapper/2/0 is trying to acquire lock:
 ffff888177190110 (&sch->q.lock){+.-.}-{2:2}, at: __dev_queue_xmit+0x1560/0x2e70

 but task is already holding lock:
 ffff88811995a110 (&sch->q.lock){+.-.}-{2:2}, at: __dev_queue_xmit+0x1560/0x2e70

 other info that might help us debug this:
  Possible unsafe locking scenario:

        CPU0
        ----
   lock(&sch->q.lock);
   lock(&sch->q.lock);

  *** DEADLOCK ***

  May be due to missing lock nesting notation

 5 locks held by swapper/2/0:
  #0: ffff888135a09d98 ((&in_dev->mr_ifc_timer)){+.-.}-{0:0}, at: call_timer_fn+0x11a/0x510
  #1: ffffffffaaee5260 (rcu_read_lock){....}-{1:2}, at: ip_finish_output2+0x2c0/0x1ed0
  #2: ffffffffaaee5200 (rcu_read_lock_bh){....}-{1:2}, at: __dev_queue_xmit+0x209/0x2e70
  #3: ffff88811995a110 (&sch->q.lock){+.-.}-{2:2}, at: __dev_queue_xmit+0x1560/0x2e70
  #4: ffffffffaaee5200 (rcu_read_lock_bh){....}-{1:2}, at: __dev_queue_xmit+0x209/0x2e70

 stack backtrace:
 CPU: 2 PID: 0 Comm: swapper/2 Not tainted 6.7.0-rc3+ #598
 Hardware name: Red Hat KVM, BIOS 1.13.0-2.module+el8.3.0+7353+9de0a3cc 04/01/2014
 Call Trace:
  <IRQ>
  dump_stack_lvl+0x4a/0x80
  __lock_acquire+0xfdd/0x3150
  lock_acquire+0x1ca/0x540
  _raw_spin_lock+0x34/0x80
  __dev_queue_xmit+0x1560/0x2e70
  tcf_mirred_act+0x82e/0x1260 [act_mirred]
  tcf_action_exec+0x161/0x480
  tcf_classify+0x689/0x1170
  prio_enqueue+0x316/0x660 [sch_prio]
  dev_qdisc_enqueue+0x46/0x220
  __dev_queue_xmit+0x1615/0x2e70
  ip_finish_output2+0x1218/0x1ed0
  __ip_finish_output+0x8b3/0x1350
  ip_output+0x163/0x4e0
  igmp_ifc_timer_expire+0x44b/0x930
  call_timer_fn+0x1a2/0x510
  run_timer_softirq+0x54d/0x11a0
  __do_softirq+0x1b3/0x88f
  irq_exit_rcu+0x18f/0x1e0
  sysvec_apic_timer_interrupt+0x6f/0x90
  </IRQ>
```

This happens when TC does a mirred egress redirect from the root qdisc of device A to the root qdisc of device B. As long as these two locks aren't protecting the same qdisc, they can be acquired in chain: add a per-qdisc lockdep key to silence false warnings (a sketch of this idea appears after this message).

This dynamic key should safely replace the static key we have in sch_htb: it was added to allow enqueueing to the device "direct qdisc" while still holding the qdisc root lock.

v2: don't use static keys anymore in HTB direct qdiscs (thanks Eric Dumazet)

CC: Maxim Mikityanskiy <[email protected]>
CC: Xiumei Mu <[email protected]>
Reported-by: Christoph Paasch <[email protected]>
Closes: multipath-tcp/mptcp_net-next#451
Signed-off-by: Davide Caratti <[email protected]>
Link: https://lore.kernel.org/r/7dc06d6158f72053cf877a82e2a7a5bd23692faa.1713448007.git.dcaratti@redhat.com
Signed-off-by: Paolo Abeni <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
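A hedged sketch of the per-qdisc key idea; the placement at init time and the abbreviated struct are assumptions, not the actual patch. Registering a dynamic lock_class_key per qdisc gives each root lock its own lockdep class, so taking device A's root lock while holding device B's no longer looks like recursion to lockdep.

```c
#include <linux/lockdep.h>
#include <linux/spinlock.h>

/* Abbreviated qdisc; the real struct Qdisc carries far more state. */
struct qdisc_sketch {
	spinlock_t q_lock;			/* the "root lock" here */
	struct lock_class_key root_lock_key;	/* one class per qdisc */
};

static void qdisc_sketch_init(struct qdisc_sketch *q)
{
	spin_lock_init(&q->q_lock);
	/* Dynamic key: each instance gets a distinct lockdep class,
	 * silencing false "possible recursive locking" reports when
	 * mirred redirects between two different devices. */
	lockdep_register_key(&q->root_lock_key);
	lockdep_set_class(&q->q_lock, &q->root_lock_key);
}

static void qdisc_sketch_destroy(struct qdisc_sketch *q)
{
	/* Dynamic keys must be unregistered before the memory goes away. */
	lockdep_unregister_key(&q->root_lock_key);
}
```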
RevySR pushed a commit that referenced this pull request on Jun 29, 2024
[ Upstream commit cebae29 ]

Shin'ichiro reported that when he's running fstests' test-case btrfs/167 on emulated zoned devices, he's seeing the following NULL pointer dereference in 'btrfs_zone_finish_endio()':

```
Oops: general protection fault, probably for non-canonical address 0xdffffc0000000011: 0000 [#1] PREEMPT SMP KASAN NOPTI
KASAN: null-ptr-deref in range [0x0000000000000088-0x000000000000008f]
CPU: 4 PID: 2332440 Comm: kworker/u80:15 Tainted: G W 6.10.0-rc2-kts+ #4
Hardware name: Supermicro Super Server/X11SPi-TF, BIOS 3.3 02/21/2020
Workqueue: btrfs-endio-write btrfs_work_helper [btrfs]
RIP: 0010:btrfs_zone_finish_endio.part.0+0x34/0x160 [btrfs]

RSP: 0018:ffff88867f107a90 EFLAGS: 00010206
RAX: dffffc0000000000 RBX: 0000000000000000 RCX: ffffffff893e5534
RDX: 0000000000000011 RSI: 0000000000000004 RDI: 0000000000000088
RBP: 0000000000000002 R08: 0000000000000001 R09: ffffed1081696028
R10: ffff88840b4b0143 R11: ffff88834dfff600 R12: ffff88840b4b0000
R13: 0000000000020000 R14: 0000000000000000 R15: ffff888530ad5210
FS: 0000000000000000(0000) GS:ffff888e3f800000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f87223fff38 CR3: 00000007a7c6a002 CR4: 00000000007706f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
 <TASK>
 ? __die_body.cold+0x19/0x27
 ? die_addr+0x46/0x70
 ? exc_general_protection+0x14f/0x250
 ? asm_exc_general_protection+0x26/0x30
 ? do_raw_read_unlock+0x44/0x70
 ? btrfs_zone_finish_endio.part.0+0x34/0x160 [btrfs]
 btrfs_finish_one_ordered+0x5d9/0x19a0 [btrfs]
 ? __pfx_lock_release+0x10/0x10
 ? do_raw_write_lock+0x90/0x260
 ? __pfx_do_raw_write_lock+0x10/0x10
 ? __pfx_btrfs_finish_one_ordered+0x10/0x10 [btrfs]
 ? _raw_write_unlock+0x23/0x40
 ? btrfs_finish_ordered_zoned+0x5a9/0x850 [btrfs]
 ? lock_acquire+0x435/0x500
 btrfs_work_helper+0x1b1/0xa70 [btrfs]
 ? __schedule+0x10a8/0x60b0
 ? __pfx___might_resched+0x10/0x10
 process_one_work+0x862/0x1410
 ? __pfx_lock_acquire+0x10/0x10
 ? __pfx_process_one_work+0x10/0x10
 ? assign_work+0x16c/0x240
 worker_thread+0x5e6/0x1010
 ? __pfx_worker_thread+0x10/0x10
 kthread+0x2c3/0x3a0
 ? trace_irq_enable.constprop.0+0xce/0x110
 ? __pfx_kthread+0x10/0x10
 ret_from_fork+0x31/0x70
 ? __pfx_kthread+0x10/0x10
 ret_from_fork_asm+0x1a/0x30
 </TASK>
```

Enabling CONFIG_BTRFS_ASSERT revealed the following assertion to trigger:

```
assertion failed: !list_empty(&ordered->list), in fs/btrfs/zoned.c:1815
```

This indicates that we're missing the checksums list on the ordered_extent. As btrfs/167 is doing a NOCOW write, this is to be expected. Further analysis with drgn confirmed the assumption:

```
>>> inode = prog.crashed_thread().stack_trace()[11]['ordered'].inode
>>> btrfs_inode = drgn.container_of(inode, "struct btrfs_inode", "vfs_inode")
>>> print(btrfs_inode.flags)
(u32)1
```

As zoned emulation mode simulates conventional zones on regular devices, we cannot use zone-append for writing. But we're only attaching dummy checksums if we're doing a zone-append write. So for NOCOW zoned data writes on conventional zones, also attach a dummy checksum (a sketch of this idea appears after this message).

Reported-by: Shinichiro Kawasaki <[email protected]>
Fixes: cbfce4c ("btrfs: optimize the logical to physical mapping for zoned writes")
CC: Naohiro Aota <[email protected]> # 6.6+
Tested-by: Shin'ichiro Kawasaki <[email protected]>
Reviewed-by: Naohiro Aota <[email protected]>
Signed-off-by: Johannes Thumshirn <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
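The shape of the fix, as a heavily hedged sketch: the struct layout, helper name, and flags below are all illustrative stand-ins, not the btrfs types the actual patch touches. The idea is simply that any zoned data write that bypasses zone append, such as a NOCOW write to an emulated conventional zone, gets an empty checksum entry so the ordered extent's list is never empty when the endio code asserts on it.

```c
#include <linux/list.h>
#include <linux/slab.h>

/* Illustrative stand-ins for the btrfs types; not the real layout. */
struct ordered_sketch {
	struct list_head list;		/* checksum entries hang off here */
	bool nocow;			/* NOCOW write, no real checksums */
	bool on_conventional_zone;	/* emulated zone: no zone append */
};

struct csum_sketch {
	struct list_head list;
	/* a real entry carries checksum bytes; the dummy carries none */
};

/* Hedged sketch: ensure ->list is non-empty for zoned data writes
 * that cannot use zone append, so the endio assertion holds. */
static int attach_dummy_csum(struct ordered_sketch *ordered)
{
	struct csum_sketch *dummy;

	if (!(ordered->nocow && ordered->on_conventional_zone))
		return 0;

	dummy = kzalloc(sizeof(*dummy), GFP_NOFS);
	if (!dummy)
		return -ENOMEM;
	list_add_tail(&dummy->list, &ordered->list);
	return 0;
}
```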