
ZIL: Second attempt to reduce scope of zl_issuer_lock. #15122

Merged: 1 commit into master from zil_lock2, Aug 25, 2023

Conversation

amotin
Member

@amotin amotin commented Jul 30, 2023

The previous patch #14841 turned out to have a significant flaw: it could deadlock if the zl_get_data callback got blocked waiting for TXG sync. I already handled some of those cases in the original patch, but issue #14982 showed cases that were impossible to solve within that design.

This patch fixes the problem by postponing log block allocation until the very end, just before the zios are issued, leaving nothing after that point that could block and cause deadlocks. Before that point any sleeps are now allowed and do not block the sync thread. This requires a slightly more complicated lwb state machine to allocate blocks and issue zios in the proper order, but with the special early-issue workarounds removed, the new code is much cleaner and should even be more efficient.

Since this patch uses null zios between writes, I found that null zios do not wait for their logical children to reach ready status in zio_ready(), which lets the parent write proceed prematurely, producing incorrect log blocks. Adding ZIO_CHILD_LOGICAL_BIT to zio_wait_for_children() fixes it.
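To make the description above concrete, here is a minimal, hypothetical sketch of the lifecycle the patch describes: an lwb's buffer is filled while its block pointer is still a hole, and the block is only allocated at issue time, so nothing after that point can sleep. The state names and struct fields are illustrative simplifications, not the actual OpenZFS definitions.

```c
/* Hypothetical sketch of the delayed-allocation lwb lifecycle
 * described above; not the real OpenZFS code. */
#include <assert.h>
#include <stdbool.h>

typedef enum { LWB_OPENED, LWB_CLOSED, LWB_ISSUED } lwb_state_sketch_t;

typedef struct {
	lwb_state_sketch_t state;
	bool               blk_is_hole;	/* no block pointer allocated yet */
} lwb_sketch_t;

/* Fill phase: may sleep (e.g. in a zl_get_data-style callback);
 * the block pointer is still a hole here. */
static void
lwb_fill(lwb_sketch_t *lwb)
{
	assert(lwb->state == LWB_OPENED);
	assert(lwb->blk_is_hole);
	lwb->state = LWB_CLOSED;
}

/* Issue phase: the block pointer is allocated at the last moment,
 * just before the zio is issued, so no sleeps can follow it. */
static void
lwb_issue(lwb_sketch_t *lwb)
{
	assert(lwb->state == LWB_CLOSED);
	lwb->blk_is_hole = false;	/* allocation happens here */
	lwb->state = LWB_ISSUED;
}
```

The point of the ordering is that everything that can sleep happens before `lwb_issue()`, so the sync thread can never be blocked by a half-issued log block.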

How Has This Been Tested?

The patch successfully survives heavily parallel synchronous writes of VMware vMotion over iSCSI.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
  • Documentation (a change to man pages or other documentation)

Checklist:

module/zfs/zil.c (review thread resolved)
Member

@robn robn left a comment


This looks good to me. The state transition sequence is straightforward to follow, and the use of null IOs to maintain that linearity is smart. I agree that the issuer lock in particular now has a fairly narrow scope.

I have a test bench that I've been using over the past few months to explore the behaviour of the ZIL in various "catastrophic" scenarios (e.g. multiple disks failing). It's been very good at finding locking issues, among other things. I've put this patch through some of the baseline tests, which mostly involve hammering 12-wide RAIDZ3s with hundreds of concurrent write()s and fsync()s while doing horrible things to the pool. These tests didn't trigger any locking problems (even when IOs fail, disks fail, or pools fail), which, granted, is also hard to make happen on 2.1, so I feel confident it's at least not worse than 2.1 on that score. Average fsync() latency on these particular workloads is 15-20% better than 2.1.

Sorry for being vague; I can't really talk about the details of what I'm working on yet. But at least, I feel pretty good that this is doing the right thing, and is quite a bit faster.

@dag-erling
Contributor

This pull request fixes consistent, easily reproducible deadlocks I've been experiencing on FreeBSD 14 for the past two months (run this script to reproduce). Please get this merged ASAP.

@amotin
Member Author

amotin commented Aug 21, 2023

Thanks @grwilson for finding one more peculiar deadlock scenario. Since this patch allows a single zil_lwb_write_issue() call to issue several ready LWBs, a thread may, after issuing the LWB required for the current zcw, block while working on the following one while still holding zcw_lock. The problem arises when the pool's only null interrupt taskq thread gets blocked by an LWB ZIO completion waiting for zcw_lock, while that taskq is also needed to drop the config lock wanted by the first zil_lwb_write_issue(). To fix this deadlock I reduced the scope of zcw_lock inside zil_commit_waiter_timeout(). It may cost us two more atomics per LWB, but I plan to avoid this code path altogether for single-threaded workloads a bit later.
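The lock-scope reduction described here can be sketched in a simplified, hypothetical model: the waiter lock is held only while issuing the LWB the current waiter actually depends on, and dropped before any following ready LWBs are issued, so a ZIO completion on the single null-interrupt taskq can always acquire it. The function names and the boolean stand-in for the mutex are illustrative, not the real OpenZFS primitives.

```c
/* Simplified, hypothetical model of the zcw_lock scope reduction
 * in zil_commit_waiter_timeout(); not the real OpenZFS code. */
#include <assert.h>
#include <stdbool.h>

static bool zcw_lock_held = false;

/* Stand-in for issuing one ready LWB. LWBs beyond the one the waiter
 * needs must never be issued under the waiter lock, or a completion
 * on the single null-interrupt taskq could block on that lock. */
static void
issue_one_lwb(int lwb_id, int needed_lwb)
{
	if (lwb_id > needed_lwb)
		assert(!zcw_lock_held);
}

/* Stand-in for zil_commit_waiter_timeout() with the reduced scope:
 * hold the lock only for the LWB this waiter depends on. */
static void
commit_waiter_timeout(int needed_lwb, int last_ready_lwb)
{
	zcw_lock_held = true;			/* mutex_enter()-like */
	issue_one_lwb(needed_lwb, needed_lwb);	/* waiter's own LWB */
	zcw_lock_held = false;			/* drop before the rest */
	for (int i = needed_lwb + 1; i <= last_ready_lwb; i++)
		issue_one_lwb(i, needed_lwb);	/* issued without the lock */
}
```

The extra lock release and re-acquire is where the "two more atomics per LWB" cost mentioned above would come from in the real code.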

@grwilson
Member

Testing the latest change to make sure it addresses the last deadlock

module/zfs/zil.c (review thread resolved)
module/zfs/zil.c (review thread resolved)
@@ -1041,7 +1061,8 @@ zil_destroy(zilog_t *zilog, boolean_t keep_first)
while ((lwb = list_remove_head(&zilog->zl_lwb_list)) != NULL) {
if (lwb->lwb_buf != NULL)
zio_buf_free(lwb->lwb_buf, lwb->lwb_sz);
zio_free(zilog->zl_spa, txg, &lwb->lwb_blk);
if (!BP_IS_HOLE(&lwb->lwb_blk))
Member


The original code didn't expect to see a HOLE. Can you add a comment that explains when we would end up having a HOLE on the lwb?

Member Author


With the old code every LWB always had a block pointer. The new code allows a number of LWBs to be allocated and filled in parallel before block pointers are allocated for them as part of zil_lwb_write_issue(). I am not sure I want to bloat this particular piece of code with that explanation.
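The exchange above is about the `BP_IS_HOLE()` check added in the zil_destroy() hunk. A minimal, hypothetical sketch of why it matters under delayed allocation: an lwb taken off the list may never have had its block pointer allocated, so teardown must free only real blocks. The struct and function names here are illustrative, not the actual OpenZFS definitions.

```c
/* Hypothetical sketch of the BP_IS_HOLE() guard discussed above;
 * not the real zil_destroy() code. */
#include <assert.h>
#include <stdbool.h>

typedef struct {
	bool bp_is_hole;	/* stands in for BP_IS_HOLE(&lwb->lwb_blk) */
} destroy_lwb_sketch_t;

static int blocks_freed = 0;

static void
destroy_one_lwb(const destroy_lwb_sketch_t *lwb)
{
	/* Skip lwbs whose block pointer was never allocated; freeing
	 * a hole would be freeing a block that does not exist. */
	if (!lwb->bp_is_hole)
		blocks_freed++;	/* stands in for zio_free() */
}
```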

module/zfs/zil.c Outdated
* first issue to parent IOs before waiting on the lock.
* The lwb is now ready to be issued, but it can be only if it already
* got its block pointer allocated or the allocation has failed.
* Otherwise leave it as-is, relying on some other thread to issue it.
Member


So if the lwb_blk is a hole, then who will issue it? It might be good to expand on this comment, since this seems like a relevant detail about how holes are issued. I'm assuming it will be a thread that is already issuing other lwbs, but it would be good to explain that here.

Member Author

@amotin amotin Aug 24, 2023


I've expanded the comment.

include/sys/zil_impl.h (review thread resolved)
The previous patch openzfs#14841 turned out to have a significant flaw,
causing deadlocks if the zl_get_data callback got blocked waiting for
TXG sync.  I already handled some of those cases in the original patch,
but issue openzfs#14982 showed cases that were impossible to solve in
that design.

This patch fixes the problem by postponing log block allocation until
the very end, just before the zios are issued, leaving nothing after
that point that could block and cause deadlocks.  Before that point any
sleeps are now allowed and do not block the sync thread.  This requires
a slightly more complicated lwb state machine to allocate blocks and
issue zios in the proper order, but with the special early-issue
workarounds removed, the new code is much cleaner and should even be
more efficient.

Since this patch uses null zios between writes, I found that null zios
do not wait for their logical children to reach ready status in
zio_ready(), which lets the parent write proceed prematurely, producing
incorrect log blocks.  Adding ZIO_CHILD_LOGICAL_BIT to
zio_wait_for_children() fixes it.

Signed-off-by:	Alexander Motin <[email protected]>
Sponsored by:	iXsystems, Inc.
@behlendorf behlendorf merged commit eda3fcd into openzfs:master Aug 25, 2023
19 checks passed
@behlendorf behlendorf added Status: Accepted Ready to integrate (reviewed, tested) and removed Status: Code Review Needed Ready for review and testing labels Aug 25, 2023
amotin added a commit to amotin/zfs that referenced this pull request Aug 25, 2023
Reviewed-by: Rob Norris <[email protected]>
Reviewed-by: Mark Maybee <[email protected]>
Reviewed-by: George Wilson <[email protected]>
Signed-off-by:	Alexander Motin <[email protected]>
Sponsored by:	iXsystems, Inc.
Closes openzfs#15122
@amotin amotin deleted the zil_lock2 branch August 25, 2023 00:56
behlendorf pushed a commit that referenced this pull request Aug 25, 2023
lundman pushed a commit to openzfsonwindows/openzfs that referenced this pull request Dec 12, 2023
Labels
Status: Accepted Ready to integrate (reviewed, tested)
7 participants