NAS-124109 / 24.04 / Merge v6.1.50 to add support for AVX extensions #120

Merged: 124 commits, Sep 14, 2023

Commits (124)
4a289d1
NFSv4.2: fix error handling in nfs42_proc_getxattr
Jul 25, 2023
d9aac9c
NFSv4: fix out path in __nfs4_get_acl_uncached
Jul 25, 2023
26ea866
xprtrdma: Remap Receive buffers after a reconnect
chucklever Jul 3, 2023
cd1f889
drm/ast: Use drm_aperture_remove_conflicting_pci_framebuffers
danvet Jan 11, 2023
6db53af
fbdev/radeon: use pci aperture helpers
danvet Jan 11, 2023
cccfcbb
drm/gma500: Use drm_aperture_remove_conflicting_pci_framebuffers
danvet Apr 6, 2023
437e99f
drm/aperture: Remove primary argument
danvet Apr 6, 2023
4aad3b8
video/aperture: Only kick vgacon when the pdev is decoding vga
danvet Apr 6, 2023
2891692
video/aperture: Move vga handling to pci function
danvet Apr 6, 2023
3e4d038
PCI: acpiphp: Reassign resources on bridge if necessary
Apr 24, 2023
92c568c
MIPS: cpu-features: Enable octeon_cache by cpu_type
FlyGoat Apr 4, 2023
1fa68a7
MIPS: cpu-features: Use boot_cpu_type for CPU type based features
FlyGoat Jun 7, 2023
8168c96
jbd2: remove t_checkpoint_io_list
zhangyi089 Jun 6, 2023
5fda50e
jbd2: remove journal_clean_one_cp_list()
zhangyi089 Jun 6, 2023
e5c768d
jbd2: fix a race when checking checkpoint buffer busy
zhangyi089 Jun 6, 2023
335987e
can: raw: fix receiver memory leak
Jul 11, 2023
40dafca
can: raw: fix lockdep issue in raw_release()
Jul 20, 2023
246d763
s390/zcrypt: remove unnecessary (void *) conversions
yuzhenfschina Mar 3, 2023
d4f5dcf
s390/zcrypt: fix reply buffer calculations for CCA replies
hfreude Jul 17, 2023
c23126f
drm/i915: Add the gen12_needs_ccs_aux_inv helper
Jul 25, 2023
017d440
drm/i915/gt: Ensure memory quiesced before invalidation
Jonathan-Cavitt Jul 25, 2023
8e3f138
drm/i915/gt: Poll aux invalidation register bit on invalidation
Jonathan-Cavitt Jul 25, 2023
7e862cc
drm/i915/gt: Support aux invalidation on all engines
Jul 25, 2023
7d0c2b0
tracing: Fix cpu buffers unavailable due to 'record_disabled' missed
Aug 5, 2023
2cb0c03
tracing: Fix memleak due to race between current_tracer and trace
Aug 17, 2023
eaeef5c
octeontx2-af: SDP: fix receive link config
Aug 17, 2023
1375d20
devlink: move code to a dedicated directory
kuba-moo Jan 5, 2023
b701b8d
devlink: add missing unregister linecard notification
Aug 17, 2023
cfee179
net: dsa: felix: fix oversize frame dropping for always closed tc-tap…
vladimiroltean Aug 17, 2023
b516a24
sock: annotate data-races around prot->memory_pressure
Aug 18, 2023
265ed38
dccp: annotate data-races in dccp_poll()
Aug 18, 2023
4496f6c
ipvlan: Fix a reference count leak warning in ipvlan_ns_exit()
Aug 17, 2023
22f9b54
mlxsw: pci: Set time stamp fields also when its type is MIRROR_UTC
daniellerts Aug 17, 2023
7134565
mlxsw: reg: Fix SSPR register layout
idosch Aug 17, 2023
1288f99
mlxsw: Fix the size of 'VIRT_ROUTER_MSB'
Aug 17, 2023
c663607
selftests: mlxsw: Fix test failure on Spectrum-4
idosch Aug 17, 2023
ac25925
net: dsa: mt7530: fix handling of 802.1X PAE frames
arinc9 Aug 13, 2023
029e491
net: bgmac: Fix return value check for fixed_phy_register()
Aug 18, 2023
afc9d3d
net: bcmgenet: Fix return value check for fixed_phy_register()
Aug 18, 2023
4af1fe6
net: validate veth and vxcan peer ifindexes
kuba-moo Aug 19, 2023
417e7ec
ipv4: fix data-races around inet->inet_id
Aug 19, 2023
1188e9d
ice: fix receive buffer size miscalculation
jbrandeb Aug 10, 2023
7cddaed
Revert "ice: Fix ice VF reset during iavf initialization"
orosp Aug 11, 2023
850e232
ice: Fix NULL pointer deref during VF reset
orosp Aug 11, 2023
f41781b
selftests: bonding: do not set port down before adding to bond
liuhangbin Aug 17, 2023
39d43b9
can: isotp: fix support for transmission of SF without flow control
hartkopp Aug 21, 2023
9b7fd6b
igb: Avoid starting unnecessary workqueues
abogani Aug 21, 2023
f94f30e
igc: Fix the typo in the PTM Control macro
aneftin Aug 21, 2023
5816688
net/sched: fix a qdisc modification with ambiguous command request
jhsmt Aug 22, 2023
1368619
i40e: fix potential NULL pointer dereferencing of pf->vf i40e_sync_vs…
CuriousPanCake Aug 22, 2023
41841b5
netfilter: nf_tables: flush pending destroy work before netlink notifier
ummakynes Aug 17, 2023
ed3fe5f
netfilter: nf_tables: fix out of memory error handling
Aug 22, 2023
b15dea3
rtnetlink: Reject negative ifindexes in RTM_NEWLINK
idosch Aug 23, 2023
a0559fd
bonding: fix macvlan over alb bond support
liuhangbin Aug 23, 2023
2800385
KVM: x86: Preserve TDP MMU roots until they are explicitly invalidated
sean-jc Apr 26, 2023
82d811f
KVM: x86/mmu: Fix an sign-extension bug with mmu_seq that hangs vCPUs
sean-jc Aug 24, 2023
0d617fb
io_uring: get rid of double locking
isilence Dec 7, 2022
4f59375
io_uring: extract a io_msg_install_complete helper
isilence Dec 7, 2022
816c7ce
io_uring/msg_ring: move double lock/unlock helpers higher up
axboe Jan 19, 2023
22a406b
io_uring/msg_ring: fix missing lock on overflow for IOPOLL
axboe Aug 23, 2023
014fec5
ASoC: amd: yc: Add VivoBook Pro 15 to quirks list for acp6x
BrenoRCBrito Aug 18, 2023
85607ef
ASoC: cs35l41: Correct amp_gain_tlv values
charleskeepax Aug 23, 2023
b8b7243
ibmveth: Use dcbf rather than dcbfl
mpe Aug 23, 2023
e6a60ec
wifi: mac80211: limit reorder_buf_filtered to avoid UBSAN warning
Aug 18, 2023
ac467d7
platform/x86: ideapad-laptop: Add support for new hotkeys found on Th…
Aug 19, 2023
14904f4
NFSv4: Fix dropped lock for racing OPEN and delegation return
Jun 30, 2023
a7d1722
clk: Fix slab-out-of-bounds error in devm_clk_release()
AndreySV Aug 5, 2023
091591f
mm,ima,kexec,of: use memblock_free_late from ima_free_kexec_buffer
rikvanriel Aug 17, 2023
d13f3a6
shmem: fix smaps BUG sleeping while atomic
Aug 23, 2023
d4e11b8
ALSA: ymfpci: Fix the missing snd_card_free() call at probe error
tiwai Aug 23, 2023
a8a60bc
mm/gup: handle cont-PTE hugetlb pages correctly in gup_must_unshare()…
davidhildenbrand Aug 5, 2023
07fad41
mm: add a call to flush_cache_vmap() in vmap_pfn()
Aug 9, 2023
bdc544a
mm: memory-failure: fix unexpected return value in soft_offline_page()
MiaoheLin Jun 27, 2023
96fb46e
NFS: Fix a use after free in nfs_direct_join_group()
Aug 9, 2023
36c5aec
nfsd: Fix race to FREE_STATEID and cl_revoked
Aug 4, 2023
d6b64d7
selinux: set next pointer before attaching to list
cgzones Aug 18, 2023
efef746
batman-adv: Trigger events for auto adjusted MTU
ecsv Jul 19, 2023
ed1eb19
batman-adv: Don't increase MTU when set by user
ecsv Jul 19, 2023
fc9b87d
batman-adv: Do not get eth header before batadv_check_management_packet
repk Jul 28, 2023
f1bead9
batman-adv: Fix TT global entry leak when client roamed back
repk Aug 4, 2023
cb1f73e
batman-adv: Fix batadv_v_ogm_aggr_send memory leak
repk Aug 9, 2023
82bb5f8
batman-adv: Hold rtnl lock during MTU update via netlink
ecsv Aug 21, 2023
30ffd58
lib/clz_ctz.c: Fix __clzdi2() and __ctzdi2() for 32-bit kernels
hdeller Aug 25, 2023
3383597
riscv: Handle zicsr/zifencei issue between gcc and binutils
xmzzz Aug 9, 2023
aa096bc
riscv: Fix build errors using binutils2.37 toolchains
xmzzz Aug 24, 2023
e75de82
radix tree: remove unused variable
arndb Aug 11, 2023
2d00ca9
of: unittest: Fix EXPECT for parse_phandle_with_args_map() test
robherring Aug 18, 2023
c6b7d89
of: dynamic: Refactor action prints to not use "%pOF" inside devtree_…
robherring Aug 18, 2023
4919043
pinctrl: amd: Mask wake bits on probe again
superm1 Aug 18, 2023
fe04122
media: vcodec: Fix potential array out-of-bounds in encoder queue_setup
harperchen Aug 10, 2023
1900e19
PCI: acpiphp: Use pci_assign_unassigned_bridge_resources() only for n…
Jul 26, 2023
115f2cc
drm/vmwgfx: Fix shader stage validation
zackr Jun 16, 2023
3abffee
drm/i915/dgfx: Enable d3cold at s2idle
anshuma1 Aug 16, 2023
3bc9b03
drm/display/dp: Fix the DP DSC Receiver cap size
aknautiyal Aug 18, 2023
6bcb9c7
x86/fpu: Invalidate FPU state correctly on exec()
rpedgeco Aug 18, 2023
d8f9a9c
x86/fpu: Set X86_FEATURE_OSXSAVE feature after enabling OSXSAVE in CR4
ftang1 Aug 23, 2023
f1fa6e6
hwmon: (aquacomputer_d5next) Add selective 200ms delay after sending …
aleksamagicka Aug 7, 2023
a0ec52f
selftests/net: mv bpf/nat6to4.c to net folder
liuhangbin Jan 18, 2023
362ed5d
nfs: use vfs setgid helper
brauner Mar 14, 2023
ce59b7c
nfsd: use vfs setgid helper
brauner May 2, 2023
7030fbf
cgroup/cpuset: Rename functions dealing with DEADLINE accounting
jlelli Aug 20, 2023
9bcfe15
sched/cpuset: Bring back cpuset_mutex
jlelli Aug 20, 2023
d1b4262
sched/cpuset: Keep track of SCHED_DEADLINE task in cpusets
jlelli Aug 20, 2023
064b960
cgroup/cpuset: Iterate only if DEADLINE tasks are present
jlelli Aug 20, 2023
f013513
sched/deadline: Create DL BW alloc, free & check overflow interface
deggeman Aug 20, 2023
d3ff670
cgroup/cpuset: Free DL BW in case can_attach() fails
deggeman Aug 20, 2023
f016326
thunderbolt: Fix Thunderbolt 3 display flickering issue on 2nd hot pl…
Aug 2, 2023
b7803af
ublk: remove check IO_URING_F_SQE128 in ublk_ch_uring_cmd
Feb 20, 2023
f67e3a7
can: raw: add missing refcount for memory leak fix
hartkopp Aug 21, 2023
bd20e20
madvise:madvise_free_pte_range(): don't use mapcount() against large …
fyin1 Aug 8, 2023
774cb3d
scsi: snic: Fix double free in snic_tgt_create()
PeterZhu789 Aug 19, 2023
7046115
scsi: core: raid_class: Remove raid_component_add()
PeterZhu789 Aug 22, 2023
0ba9a24
clk: Fix undefined reference to `clk_rate_exclusive_{get,put}'
Jul 25, 2023
4a75bf3
pinctrl: renesas: rzg2l: Fix NULL pointer dereference in rzg2l_dt_sub…
Aug 15, 2023
3fb1b95
pinctrl: renesas: rzv2m: Fix NULL pointer dereference in rzv2m_dt_sub…
Aug 15, 2023
6ed06b9
pinctrl: renesas: rza2: Add lock around pinctrl_generic{{add,remove}_…
Aug 15, 2023
3282e79
dma-buf/sw_sync: Avoid recursive lock during fence signal
robclark Aug 18, 2023
3c839f8
gpio: sim: dispose of irq mappings before destroying the irq_sim domain
Aug 22, 2023
d10ab99
gpio: sim: pass the GPIO device's software node to irq domain
Aug 22, 2023
936cf79
ASoC: amd: yc: Fix a non-functional mic on Lenovo 82SJ
superm1 Aug 24, 2023
9d5a3b4
maple_tree: disable mas_wr_append() when other readers are possible
howlett Aug 19, 2023
19641b9
ASoC: amd: vangogh: select CONFIG_SND_AMD_ACP_CONFIG
arndb Jun 5, 2023
a2943d2
Linux 6.1.50
gregkh Aug 30, 2023
f125942
Merge tag 'v6.1.50' into nwrelmrg
usaleem-ix Sep 14, 2023
2 changes: 1 addition & 1 deletion MAINTAINERS
@@ -6027,7 +6027,7 @@ S: Supported
F: Documentation/networking/devlink
F: include/net/devlink.h
F: include/uapi/linux/devlink.h
F: net/core/devlink.c
F: net/devlink/

DH ELECTRONICS IMX6 DHCOM BOARD SUPPORT
M: Christoph Niedermaier <[email protected]>
2 changes: 1 addition & 1 deletion Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 6
PATCHLEVEL = 1
SUBLEVEL = 49
SUBLEVEL = 50
NAME = Curry Ramen

ifndef EXTRAVERSION
21 changes: 19 additions & 2 deletions arch/mips/include/asm/cpu-features.h
@@ -121,7 +121,24 @@
#define cpu_has_4k_cache __isa_ge_or_opt(1, MIPS_CPU_4K_CACHE)
#endif
#ifndef cpu_has_octeon_cache
#define cpu_has_octeon_cache 0
#define cpu_has_octeon_cache \
({ \
int __res; \
\
switch (boot_cpu_type()) { \
case CPU_CAVIUM_OCTEON: \
case CPU_CAVIUM_OCTEON_PLUS: \
case CPU_CAVIUM_OCTEON2: \
case CPU_CAVIUM_OCTEON3: \
__res = 1; \
break; \
\
default: \
__res = 0; \
} \
\
__res; \
})
#endif
/* Don't override `cpu_has_fpu' to 1 or the "nofpu" option won't work. */
#ifndef cpu_has_fpu
@@ -351,7 +368,7 @@
({ \
int __res; \
\
switch (current_cpu_type()) { \
switch (boot_cpu_type()) { \
case CPU_M14KC: \
case CPU_74K: \
case CPU_1074K: \
28 changes: 17 additions & 11 deletions arch/riscv/Kconfig
@@ -447,24 +447,30 @@ config TOOLCHAIN_HAS_ZIHINTPAUSE
config TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI
def_bool y
# https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=aed44286efa8ae8717a77d94b51ac3614e2ca6dc
depends on AS_IS_GNU && AS_VERSION >= 23800
# https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=98416dbb0a62579d4a7a4a76bab51b5b52fec2cd
depends on AS_IS_GNU && AS_VERSION >= 23600
help
Newer binutils versions default to ISA spec version 20191213 which
moves some instructions from the I extension to the Zicsr and Zifencei
extensions.
Binutils-2.38 and GCC-12.1.0 bumped the default ISA spec to the newer
20191213 version, which moves some instructions from the I extension to
the Zicsr and Zifencei extensions. This requires explicitly specifying
Zicsr and Zifencei when binutils >= 2.38 or GCC >= 12.1.0. Zicsr
and Zifencei are supported in binutils from version 2.36 onwards.
To make life easier, and avoid forcing toolchains that default to a
newer ISA spec to version 2.2, relax the check to binutils >= 2.36.
For clang < 17 or GCC < 11.3.0, for which this is not possible or need
special treatment, this is dealt with in TOOLCHAIN_NEEDS_OLD_ISA_SPEC.

config TOOLCHAIN_NEEDS_OLD_ISA_SPEC
def_bool y
depends on TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI
# https://github.com/llvm/llvm-project/commit/22e199e6afb1263c943c0c0d4498694e15bf8a16
depends on CC_IS_CLANG && CLANG_VERSION < 170000
# https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=d29f5d6ab513c52fd872f532c492e35ae9fd6671
depends on (CC_IS_CLANG && CLANG_VERSION < 170000) || (CC_IS_GCC && GCC_VERSION < 110300)
help
Certain versions of clang do not support zicsr and zifencei via -march
but newer versions of binutils require it for the reasons noted in the
help text of CONFIG_TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI. This
option causes an older ISA spec compatible with these older versions
of clang to be passed to GAS, which has the same result as passing zicsr
and zifencei to -march.
Certain versions of clang and GCC do not support zicsr and zifencei via
-march. This option causes an older ISA spec compatible with these older
versions of clang and GCC to be passed to GAS, which has the same result
as passing zicsr and zifencei to -march.

config FPU
bool "FPU support"
8 changes: 7 additions & 1 deletion arch/riscv/kernel/compat_vdso/Makefile
@@ -11,7 +11,13 @@ compat_vdso-syms += flush_icache
COMPAT_CC := $(CC)
COMPAT_LD := $(LD)

COMPAT_CC_FLAGS := -march=rv32g -mabi=ilp32
# binutils 2.35 does not support the zifencei extension, but in the ISA
# spec 20191213, G stands for IMAFD_ZICSR_ZIFENCEI.
ifdef CONFIG_TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI
COMPAT_CC_FLAGS := -march=rv32g -mabi=ilp32
else
COMPAT_CC_FLAGS := -march=rv32imafd -mabi=ilp32
endif
COMPAT_LD_FLAGS := -melf32lriscv

# Disable attributes, as they're useless and break the build.
3 changes: 1 addition & 2 deletions arch/x86/kernel/fpu/context.h
@@ -19,8 +19,7 @@
* FPU state for a task MUST let the rest of the kernel know that the
* FPU registers are no longer valid for this task.
*
* Either one of these invalidation functions is enough. Invalidate
* a resource you control: CPU if using the CPU for something else
* Invalidate a resource you control: CPU if using the CPU for something else
* (with preemption disabled), FPU for the current task, or a task that
* is prevented from running by the current task.
*/
2 changes: 1 addition & 1 deletion arch/x86/kernel/fpu/core.c
@@ -679,7 +679,7 @@ static void fpu_reset_fpregs(void)
struct fpu *fpu = &current->thread.fpu;

fpregs_lock();
fpu__drop(fpu);
__fpu_invalidate_fpregs_state(fpu);
/*
* This does not change the actual hardware registers. It just
* resets the memory image and sets TIF_NEED_FPU_LOAD so a
7 changes: 7 additions & 0 deletions arch/x86/kernel/fpu/xstate.c
@@ -882,6 +882,13 @@ void __init fpu__init_system_xstate(unsigned int legacy_size)
goto out_disable;
}

/*
* CPU capabilities initialization runs before FPU init. So
* X86_FEATURE_OSXSAVE is not set. Now that XSAVE is completely
* functional, set the feature bit so depending code works.
*/
setup_force_cpu_cap(X86_FEATURE_OSXSAVE);

print_xstate_offset_size();
pr_info("x86/fpu: Enabled xstate features 0x%llx, context size is %d bytes, using '%s' format.\n",
fpu_kernel_cfg.max_features,
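As an illustrative aside (not part of this patch set): the xstate.c hunk above forces X86_FEATURE_OSXSAVE once XSAVE setup completes, which ties into this merge's goal of exposing the AVX extensions. The conventional userspace-side check pairs the CPUID OSXSAVE/AVX bits with an XGETBV read of XCR0. A minimal sketch, assuming an x86-64 GCC or Clang toolchain (this program is not from the kernel tree):

/*
 * Illustrative userspace AVX availability check (not from this patch set).
 * Assumes an x86-64 GCC or Clang toolchain providing <cpuid.h>.
 */
#include <cpuid.h>
#include <stdint.h>
#include <stdio.h>

static int avx_usable(void)
{
        unsigned int eax, ebx, ecx, edx;
        uint32_t lo, hi;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                return 0;

        /* CPUID.1:ECX bit 27 = OSXSAVE, bit 28 = AVX. */
        if (!(ecx & (1u << 27)) || !(ecx & (1u << 28)))
                return 0;

        /* XGETBV with ECX = 0 reads XCR0; bits 1 (SSE) and 2 (AVX) must both
         * be set, i.e. the OS saves and restores YMM state on context switch. */
        __asm__ volatile("xgetbv" : "=a"(lo), "=d"(hi) : "c"(0));
        (void)hi;
        return (lo & 0x6) == 0x6;
}

int main(void)
{
        printf("AVX usable: %s\n", avx_usable() ? "yes" : "no");
        return 0;
}

Both the AVX CPUID bit and the XCR0 SSE/AVX bits must be set before YMM registers can be used safely; the kernel change above is what makes the OSXSAVE bit visible to code that keys off it.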
3 changes: 2 additions & 1 deletion arch/x86/kvm/mmu/mmu.c
@@ -4212,7 +4212,8 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
* root was invalidated by a memslot update or a relevant mmu_notifier fired.
*/
static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
struct kvm_page_fault *fault, int mmu_seq)
struct kvm_page_fault *fault,
unsigned long mmu_seq)
{
struct kvm_mmu_page *sp = to_shadow_page(vcpu->arch.mmu->root.hpa);

121 changes: 56 additions & 65 deletions arch/x86/kvm/mmu/tdp_mmu.c
@@ -51,7 +51,17 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
if (!kvm->arch.tdp_mmu_enabled)
return;

/* Also waits for any queued work items. */
/*
* Invalidate all roots, which besides the obvious, schedules all roots
* for zapping and thus puts the TDP MMU's reference to each root, i.e.
* ultimately frees all roots.
*/
kvm_tdp_mmu_invalidate_all_roots(kvm);

/*
* Destroying a workqueue also first flushes the workqueue, i.e. no
* need to invoke kvm_tdp_mmu_zap_invalidated_roots().
*/
destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);

WARN_ON(!list_empty(&kvm->arch.tdp_mmu_pages));
@@ -127,16 +137,6 @@ static void tdp_mmu_schedule_zap_root(struct kvm *kvm, struct kvm_mmu_page *root
queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work);
}

static inline bool kvm_tdp_root_mark_invalid(struct kvm_mmu_page *page)
{
union kvm_mmu_page_role role = page->role;
role.invalid = true;

/* No need to use cmpxchg, only the invalid bit can change. */
role.word = xchg(&page->role.word, role.word);
return role.invalid;
}

void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
bool shared)
{
@@ -145,45 +145,12 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
if (!refcount_dec_and_test(&root->tdp_mmu_root_count))
return;

WARN_ON(!root->tdp_mmu_page);

/*
* The root now has refcount=0. It is valid, but readers already
* cannot acquire a reference to it because kvm_tdp_mmu_get_root()
* rejects it. This remains true for the rest of the execution
* of this function, because readers visit valid roots only
* (except for tdp_mmu_zap_root_work(), which however
* does not acquire any reference itself).
*
* Even though there are flows that need to visit all roots for
* correctness, they all take mmu_lock for write, so they cannot yet
* run concurrently. The same is true after kvm_tdp_root_mark_invalid,
* since the root still has refcount=0.
*
* However, tdp_mmu_zap_root can yield, and writers do not expect to
* see refcount=0 (see for example kvm_tdp_mmu_invalidate_all_roots()).
* So the root temporarily gets an extra reference, going to refcount=1
* while staying invalid. Readers still cannot acquire any reference;
* but writers are now allowed to run if tdp_mmu_zap_root yields and
* they might take an extra reference if they themselves yield.
* Therefore, when the reference is given back by the worker,
* there is no guarantee that the refcount is still 1. If not, whoever
* puts the last reference will free the page, but they will not have to
* zap the root because a root cannot go from invalid to valid.
* The TDP MMU itself holds a reference to each root until the root is
* explicitly invalidated, i.e. the final reference should be never be
* put for a valid root.
*/
if (!kvm_tdp_root_mark_invalid(root)) {
refcount_set(&root->tdp_mmu_root_count, 1);

/*
* Zapping the root in a worker is not just "nice to have";
* it is required because kvm_tdp_mmu_invalidate_all_roots()
* skips already-invalid roots. If kvm_tdp_mmu_put_root() did
* not add the root to the workqueue, kvm_tdp_mmu_zap_all_fast()
* might return with some roots not zapped yet.
*/
tdp_mmu_schedule_zap_root(kvm, root);
return;
}
KVM_BUG_ON(!is_tdp_mmu_page(root) || !root->role.invalid, kvm);

spin_lock(&kvm->arch.tdp_mmu_pages_lock);
list_del_rcu(&root->link);
@@ -329,7 +296,14 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
root = tdp_mmu_alloc_sp(vcpu);
tdp_mmu_init_sp(root, NULL, 0, role);

refcount_set(&root->tdp_mmu_root_count, 1);
/*
* TDP MMU roots are kept until they are explicitly invalidated, either
* by a memslot update or by the destruction of the VM. Initialize the
* refcount to two; one reference for the vCPU, and one reference for
* the TDP MMU itself, which is held until the root is invalidated and
* is ultimately put by tdp_mmu_zap_root_work().
*/
refcount_set(&root->tdp_mmu_root_count, 2);

spin_lock(&kvm->arch.tdp_mmu_pages_lock);
list_add_rcu(&root->link, &kvm->arch.tdp_mmu_roots);
@@ -1027,32 +1001,49 @@ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
/*
* Mark each TDP MMU root as invalid to prevent vCPUs from reusing a root that
* is about to be zapped, e.g. in response to a memslots update. The actual
* zapping is performed asynchronously, so a reference is taken on all roots.
* Using a separate workqueue makes it easy to ensure that the destruction is
* performed before the "fast zap" completes, without keeping a separate list
* of invalidated roots; the list is effectively the list of work items in
* the workqueue.
*
* Get a reference even if the root is already invalid, the asynchronous worker
* assumes it was gifted a reference to the root it processes. Because mmu_lock
* is held for write, it should be impossible to observe a root with zero refcount,
* i.e. the list of roots cannot be stale.
* zapping is performed asynchronously. Using a separate workqueue makes it
* easy to ensure that the destruction is performed before the "fast zap"
* completes, without keeping a separate list of invalidated roots; the list is
* effectively the list of work items in the workqueue.
*
* This has essentially the same effect for the TDP MMU
* as updating mmu_valid_gen does for the shadow MMU.
* Note, the asynchronous worker is gifted the TDP MMU's reference.
* See kvm_tdp_mmu_get_vcpu_root_hpa().
*/
void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
{
struct kvm_mmu_page *root;

lockdep_assert_held_write(&kvm->mmu_lock);
list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
if (!root->role.invalid &&
!WARN_ON_ONCE(!kvm_tdp_mmu_get_root(root))) {
/*
* mmu_lock must be held for write to ensure that a root doesn't become
* invalid while there are active readers (invalidating a root while
* there are active readers may or may not be problematic in practice,
* but it's uncharted territory and not supported).
*
* Waive the assertion if there are no users of @kvm, i.e. the VM is
* being destroyed after all references have been put, or if no vCPUs
* have been created (which means there are no roots), i.e. the VM is
* being destroyed in an error path of KVM_CREATE_VM.
*/
if (IS_ENABLED(CONFIG_PROVE_LOCKING) &&
refcount_read(&kvm->users_count) && kvm->created_vcpus)
lockdep_assert_held_write(&kvm->mmu_lock);

/*
* As above, mmu_lock isn't held when destroying the VM! There can't
* be other references to @kvm, i.e. nothing else can invalidate roots
* or be consuming roots, but walking the list of roots does need to be
* guarded against roots being deleted by the asynchronous zap worker.
*/
rcu_read_lock();

list_for_each_entry_rcu(root, &kvm->arch.tdp_mmu_roots, link) {
if (!root->role.invalid) {
root->role.invalid = true;
tdp_mmu_schedule_zap_root(kvm, root);
}
}

rcu_read_unlock();
}

/*
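A note on the refcounting scheme described in the tdp_mmu.c comments above: each root is now created with two references, one for the vCPU and one held by the TDP MMU itself, and the MMU's reference is dropped only when the root is explicitly invalidated, so a valid root can never hit a zero refcount. A minimal userspace sketch of that ownership model (illustrative only, not kernel code; all names here are hypothetical):

/*
 * Minimal userspace sketch of the "owner holds a reference until explicit
 * invalidation" scheme described above. Not kernel code; names are made up.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct root {
        atomic_int refcount;
        bool invalid;
};

static struct root *root_create(void)
{
        struct root *r = calloc(1, sizeof(*r));

        if (!r)
                abort();
        /* One reference for the creator (the "vCPU") and one for the owner
         * (the "TDP MMU"), mirroring refcount_set(&root->tdp_mmu_root_count, 2). */
        atomic_init(&r->refcount, 2);
        return r;
}

static void root_put(struct root *r)
{
        /* Free only when the last reference is dropped. */
        if (atomic_fetch_sub(&r->refcount, 1) == 1)
                free(r);
}

static void root_invalidate(struct root *r)
{
        /* The owner puts its reference only on explicit invalidation, so a
         * still-valid root can never reach a refcount of zero. */
        r->invalid = true;
        root_put(r);
}

int main(void)
{
        struct root *r = root_create();

        root_put(r);        /* creator is done; the owner still holds it */
        root_invalidate(r); /* owner drops its reference; root is freed  */
        printf("done\n");
        return 0;
}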
3 changes: 0 additions & 3 deletions drivers/block/ublk_drv.c
@@ -1223,9 +1223,6 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
__func__, cmd->cmd_op, ub_cmd->q_id, tag,
ub_cmd->result);

if (!(issue_flags & IO_URING_F_SQE128))
goto out;

if (ub_cmd->q_id >= ub->dev_info.nr_hw_queues)
goto out;

13 changes: 7 additions & 6 deletions drivers/clk/clk-devres.c
@@ -205,18 +205,19 @@ EXPORT_SYMBOL(devm_clk_put);
struct clk *devm_get_clk_from_child(struct device *dev,
struct device_node *np, const char *con_id)
{
struct clk **ptr, *clk;
struct devm_clk_state *state;
struct clk *clk;

ptr = devres_alloc(devm_clk_release, sizeof(*ptr), GFP_KERNEL);
if (!ptr)
state = devres_alloc(devm_clk_release, sizeof(*state), GFP_KERNEL);
if (!state)
return ERR_PTR(-ENOMEM);

clk = of_clk_get_by_name(np, con_id);
if (!IS_ERR(clk)) {
*ptr = clk;
devres_add(dev, ptr);
state->clk = clk;
devres_add(dev, state);
} else {
devres_free(ptr);
devres_free(state);
}

return clk;
18 changes: 9 additions & 9 deletions drivers/dma-buf/sw_sync.c
@@ -191,6 +191,7 @@ static const struct dma_fence_ops timeline_fence_ops = {
*/
static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
{
LIST_HEAD(signalled);
struct sync_pt *pt, *next;

trace_sync_timeline(obj);
@@ -203,21 +204,20 @@ static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
if (!timeline_fence_signaled(&pt->base))
break;

list_del_init(&pt->link);
dma_fence_get(&pt->base);

list_move_tail(&pt->link, &signalled);
rb_erase(&pt->node, &obj->pt_tree);

/*
* A signal callback may release the last reference to this
* fence, causing it to be freed. That operation has to be
* last to avoid a use after free inside this loop, and must
* be after we remove the fence from the timeline in order to
* prevent deadlocking on timeline->lock inside
* timeline_fence_release().
*/
dma_fence_signal_locked(&pt->base);
}

spin_unlock_irq(&obj->lock);

list_for_each_entry_safe(pt, next, &signalled, link) {
list_del_init(&pt->link);
dma_fence_put(&pt->base);
}
}

/**
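For context on the sw_sync.c hunk above: signalled fences are first moved to a local list under obj->lock, and the final dma_fence_put() happens only after the lock is released, because a fence release callback may itself take obj->lock. A minimal userspace sketch of the same deferred-release pattern (illustrative only, not kernel code; names are hypothetical):

/*
 * Minimal userspace sketch of the deferred-release pattern used above.
 * Not kernel code; names are made up.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct item {
        struct item *next;
        void (*release)(struct item *);  /* may re-acquire list_lock */
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct item *pending;

static void signal_all(void)
{
        struct item *done = NULL, *it;

        pthread_mutex_lock(&list_lock);
        /* Detach completed items while holding the lock... */
        while ((it = pending) != NULL) {
                pending = it->next;
                it->next = done;
                done = it;
        }
        pthread_mutex_unlock(&list_lock);

        /* ...but run release callbacks only after dropping it, the way
         * sw_sync now calls dma_fence_put() outside obj->lock. */
        while ((it = done) != NULL) {
                done = it->next;
                it->release(it);
        }
}

static void free_item(struct item *it)
{
        free(it);
}

int main(void)
{
        struct item *it = calloc(1, sizeof(*it));

        if (!it)
                abort();
        it->release = free_item;
        it->next = pending;
        pending = it;

        signal_all();
        printf("done\n");
        return 0;
}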