Replies: 4 comments 8 replies
-
I’m wondering if there’s any good way to do this aside from returning a special error when the duplicate job isn’t inserted. My fear is this would be a bit unwieldy for users needing to disambiguate this error from others that could happen when inserting 🤔 From what I’m reading right now, the docs are definitely not clear about what happens in this situation; we should probably explain that more clearly as a starting point.
-
I'm not sure I love the idea of returning an error in this case because, although it's maybe a little exceptional, having a job skipped as a duplicate doesn't necessarily mean that anything untoward happened. Most users probably won't care about the insert being skipped (as that's the purpose of the unique system), but will be checking the result with the normal error check.

I think what we maybe should've done is have the insert functions return something like an insert result wrapper rather than the job row directly. The ship's probably sailed on that level of API change already though, so I wonder if what we could do is invert that design? So something like:
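The snippet that followed was lost in extraction. As a purely hypothetical sketch of what an "inverted" design might look like, with the insert returning a result wrapper instead of the bare job row (all type and field names here are assumptions for illustration, not River's actual API):

```go
package main

import "fmt"

// JobRow is a minimal stand-in for River's job row type (assumption,
// not the real type).
type JobRow struct {
	ID   int64
	Kind string
}

// InsertResult is a hypothetical wrapper: insert functions return it
// instead of the bare row, leaving room for out-of-band metadata such
// as a "skipped as duplicate" flag without polluting the job record.
type InsertResult struct {
	Job                      *JobRow
	UniqueSkippedAsDuplicate bool
}

func main() {
	res := InsertResult{
		Job:                      &JobRow{ID: 1, Kind: "email"},
		UniqueSkippedAsDuplicate: true,
	}
	if res.UniqueSkippedAsDuplicate {
		fmt.Println("insert skipped: duplicate of job", res.Job.ID)
	}
}
```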
The advantage of this design compared to putting the flag on the job row itself is that it keeps insert-time metadata out of the persisted record. Alternatively, I could also see adding a property like a method on the returned job.

Being a function compared to a normal property would flag that it's "special" and not like the other job properties that are more inherent to the job record. @bgentry Thoughts?
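A hedged illustration of that method-based alternative (the names and the unexported flag are assumptions made for this sketch, not River's actual API):

```go
package main

import "fmt"

// JobRow is a stand-in for River's job row. The unexported flag would
// hypothetically be set by the client at insert time (assumption for
// illustration only).
type JobRow struct {
	ID            int64
	skippedUnique bool
}

// UniqueSkippedAsDuplicate being a method rather than an exported field
// signals that this is insert-time metadata, not an inherent property
// of the persisted job record.
func (j *JobRow) UniqueSkippedAsDuplicate() bool { return j.skippedUnique }

func main() {
	j := &JobRow{ID: 42, skippedUnique: true}
	fmt.Println(j.UniqueSkippedAsDuplicate())
}
```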
-
What about just creating a public-facing result type?

```go
type JobInsertResult struct {
	UniqueSkippedAsDuplicate bool
	Job                      JobRow
}
```

This should make the new property "out of band" (brought up in #142), and would make adding new such properties easier in the future.
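A sketch of how callers might consume such a wrapper. The insert call here is a stub standing in for a real client insert (an assumption for illustration); only the result shape mirrors the proposal above:

```go
package main

import "fmt"

// JobRow is a minimal stand-in for River's job row type.
type JobRow struct{ ID int64 }

// JobInsertResult mirrors the public-facing wrapper proposed above.
type JobInsertResult struct {
	UniqueSkippedAsDuplicate bool
	Job                      JobRow
}

// insertStub stands in for a real client insert call (hypothetical).
func insertStub(duplicate bool) (*JobInsertResult, error) {
	return &JobInsertResult{UniqueSkippedAsDuplicate: duplicate, Job: JobRow{ID: 7}}, nil
}

func main() {
	res, err := insertStub(true)
	if err != nil {
		panic(err)
	}
	// The flag lets callers branch without disambiguating error values.
	if res.UniqueSkippedAsDuplicate {
		fmt.Printf("job %d already queued, skipped\n", res.Job.ID)
	} else {
		fmt.Printf("job %d inserted\n", res.Job.ID)
	}
}
```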
-
Hate to be a bother here, but are there any plans for how to implement this before the API turns "really stable"? Just getting a bit nervous given how close the API seems to be to stabilizing.

I liked @bgentry's idea with the wrapper struct above; I think it would nicely address the concerns noted in the approach I suggested in #142. Are there any "blockers" left that would prevent this from being implemented?
-
Hi, first of all, I love this project, nice work!
I have a setup where I am periodically running checks that might trigger River jobs. So, every 15 minutes, there is the possibility that a bunch of jobs will be started in a batch.
However, these are long-running jobs, and they frequently take longer than the check interval.
So, after running for 15 minutes, the checker logic would discover the same situation as on the previous run and trigger a job that is already running.
The solution to that is easy enough: I use a uniqueness constraint to ensure that only one of these jobs (with the same params) can be run at a time.
Now, the only issue I have – as silly as it sounds – is with logging. With each checker run, I log an overview of all the jobs that have been triggered and for what reason. The problem is that I cannot distinguish between a "successful" job insert and one that was ignored due to the uniqueness constraint. This makes my log outputs confusing because it is a mix of already-running jobs and jobs that were actually started on this run.
Is there a way to know that a job was actually inserted?
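To make the logging use case concrete, here is a self-contained sketch of what the checker's log output could look like if a skip flag were exposed. The types are local stand-ins modeled on the `JobInsertResult` proposal above, not River's actual API:

```go
package main

import "log"

// Stand-ins for River types (assumptions for illustration).
type JobRow struct {
	ID   int64
	Kind string
}

type JobInsertResult struct {
	Job                      JobRow
	UniqueSkippedAsDuplicate bool
}

// logInserts writes one line per result, separating fresh inserts from
// jobs skipped because an equivalent unique job is already running.
func logInserts(results []JobInsertResult) (inserted, skipped int) {
	for _, r := range results {
		if r.UniqueSkippedAsDuplicate {
			skipped++
			log.Printf("job %d (%s): already running, skipped", r.Job.ID, r.Job.Kind)
		} else {
			inserted++
			log.Printf("job %d (%s): started", r.Job.ID, r.Job.Kind)
		}
	}
	return inserted, skipped
}

func main() {
	results := []JobInsertResult{
		{Job: JobRow{ID: 1, Kind: "check"}},
		{Job: JobRow{ID: 2, Kind: "check"}, UniqueSkippedAsDuplicate: true},
	}
	ins, skip := logInserts(results)
	log.Printf("run summary: %d started, %d skipped", ins, skip)
}
```

This keeps the per-run overview unambiguous: already-running jobs and freshly started jobs get distinct log lines plus a summary count.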
I actually just noticed the `UniqueSkippedAsDuplicate` field on `JobInsertResult`; this seems to be exactly what I am looking for! Unfortunately, that value does not seem to get propagated through to the "external" API.
Is it somehow possible to work around this, or to include this in the external API?