Outcomes in WEB and R don't match #94
Hi @tonisoto, thanks for raising this issue; I can replicate it. Let me pinpoint what's going wrong and get back ASAP!
Thank you so much! You're very kind. I'm a novice in stats questions but
let me know if I can help.
Toni
I wondered if this was solved, as I have the same problem. In R:
Paired mean difference of CS+ (n = 30) minus CS- (n = 30): 0.207 [95%CI -2.04; 2.59]
This is the raw data:
Thanks,
There is a bug in the way the confidence intervals for paired differences are computed. Currently it seems the CIs for unpaired differences are returned instead. Specifically, it looks to be related to how stratified resampling is performed to ensure that we can resample from groups with different Ns. This is not an issue for UNpaired comparisons, of course, but it looks like some refactoring is needed to handle paired differences, OR if anyone can point me to how to handle bootstrapping with the

I apologise for this inconvenience!

edit: to clarify typo below.
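For illustration only (this is not dabestr's actual R implementation), here is a minimal NumPy sketch of the distinction being described: a paired bootstrap must resample subject indices so that pairs stay together, whereas resampling each group independently (appropriate only for unpaired data) typically yields a much wider CI, which matches the symptom reported in this thread. Data, sample size, and effect size are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_boot = 202, 5000

# Simulated paired data: y is strongly correlated with x (e.g. pre/post).
x = rng.normal(0.0, 1.0, n)
y = x + rng.normal(0.16, 0.4, n)

def percentile_ci(stats):
    """Simple 95% percentile bootstrap CI."""
    return np.percentile(stats, [2.5, 97.5])

# Correct PAIRED bootstrap: resample subject indices, keeping pairs intact.
idx = rng.integers(0, n, size=(n_boot, n))
paired_ci = percentile_ci((y[idx] - x[idx]).mean(axis=1))

# UNpaired-style bootstrap: resample each group independently.
ix = rng.integers(0, n, size=(n_boot, n))
iy = rng.integers(0, n, size=(n_boot, n))
unpaired_ci = percentile_ci(y[iy].mean(axis=1) - x[ix].mean(axis=1))

print("paired CI:  ", paired_ci)
print("unpaired CI:", unpaired_ci)
```

Because pairing removes the variance the two measurements share, the paired CI is substantially narrower than the unpaired one here, mirroring the R-vs-web discrepancy reported above.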
Thank you for the clarification! Unfortunately I am not able to help with the bug. Just a couple of other questions:

So if I use the website, I should be fine, shouldn't I? I am asking because I have used it to make the figures of 2 recently published papers and I was planning to use it for my future papers, so I wanted to make sure I got a correct outcome. I have also looked at the MATLAB version of the code, but there doesn't seem to be a way to calculate CIs for multiple paired groups.

"This is not an issue for paired, of course, ..." Do you mean it is not an issue for UNpaired?

Thanks again
You should be fine if you use the web app, yes! Also, yes, I meant unpaired, thanks for pointing it out! Have edited the typo.
cf. #107
Update: The fix is in the dev version, which can be installed with `devtools::install_github("ACCLAB/dabestr", ref = "v0.3.9999")`. cf. #99
It would be really great if this bugfix could be pushed into a release. Currently the release version should not be used for repeated-measures analysis, while the bugfixed dev (0.3.9999) version is behind in many other respects. Thanks for your help!
Or is it believed that this has been addressed in the latest released version? My analysis shows this is not the case using v2023.9.12. Using the same dataset:
Mean diff v2023.9.12: -0.315 [-0.617, -0.0206]
The dev v0.3.9999 95% CIs match the web app, not v2023.9.12.
I've also tried using the dev version v2023.9.12, but, while it addresses many of the updates missing in v0.3.9999, it still gives me the same (incorrect) result as the released v2023.9.12 for the paired-differences analysis:
Mean diff dev v2023.9.12: -0.315 [-0.617, -0.0206]
By chance I noticed that the outcomes of a paired (X and Y) mean difference that I had run in R (dabestr v0.3.0) and the one obtained after copy-pasting and uploading the same data to https://www.estimationstats.com/#/analyze/paired are quite different.
In R: Paired mean difference of X (n = 202) minus Y (n = 202) 0.161 [95CI -0.371; 0.713]
In WEB: The paired mean difference between X and Y is 0.161 [95.0%CI -0.0476, 0.367]
In both cases 202 paired observations were used (I attached the csv file). In the WEB version both variables (X, Y) were uploaded as expected in distinct columns, and in R I prepared a tidy dataset as explained here.
As you can see the limits of the 95%CI are very different and I don't understand why. Which one is the correct outcome?
By the way, I wonder to what extent the length of the 95%CI can be used to assess whether the scores of my two variables (X, Y) are 'equivalent', as is done in TOST analysis (e.g. the TOSTER R package). Let's suppose I define that two scores in my research field are equivalent if their paired mean difference falls inside -0.5 and +0.5. If all I said is correct, the outcome of the web app [95.0%CI -0.0476, 0.367] would support equivalence, but the outcome in R does not. Am I correct if I use the 95%CI of dabestr to assess equivalence, or do I have to run specific tests for this?
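This is not the TOSTER package's procedure, but the CI-inside-bounds logic described above can be sketched in a few lines of Python; the ±0.5 equivalence margin is the one assumed in the question, and the two CIs are the ones quoted in this thread. Note that TOST at α = 0.05 conventionally uses a 90% CI, so checking the 95% CI this way is somewhat conservative.

```python
def supports_equivalence(ci_low, ci_high, lower=-0.5, upper=0.5):
    """Equivalence is supported only if the WHOLE CI lies inside the bounds."""
    return lower < ci_low and ci_high < upper

# Web app result from this thread: CI entirely inside [-0.5, 0.5].
print(supports_equivalence(-0.0476, 0.367))  # True
# R (dabestr v0.3.0) result: the upper limit 0.713 exceeds +0.5.
print(supports_equivalence(-0.371, 0.713))   # False
```

Of course, this check is only as good as the CI it is fed, which is exactly why the paired-CI bug discussed above matters for any equivalence conclusion.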
Thank you so much for your software. As soon as this issue is solved I'm going to use it in my next paper. 👍
Best regards from Spain!
202_X_Y_paired_data.zip