
IMPORT moore_bergelson_2022 #114

Open
mzettersten opened this issue May 17, 2024 · 4 comments
@mzettersten (Contributor)

No description provided.

@adriansteffan (Contributor) commented Jun 18, 2024

Checklist for code review v2024

To start:

  • Git pull this repo to get the latest version
  • Update your peekds and peekbankr packages to the latest versions (a setup sketch follows this list)
    • Be sure to restart your R session to apply these updates
  • Get the latest version of the dataset from OSF (delete your raw_data folder so that the script automatically re-downloads the data)
  • Run the import script
  • Does it run into issues due to missing libraries? During restructuring, library calls for packages like janitor may have been lost in some datasets; re-add them if necessary
  • Does the validator complain about the processed data? Complain to Adrian (or fix the listed issues if you feel like it)
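
For reference, the setup boils down to something like this (the GitHub locations and the script path are assumptions; adjust to the repo's actual layout):

```r
# Setup sketch; repo locations and the script path are assumptions
# install.packages("remotes")  # if not already installed
remotes::install_github("langcog/peekds")     # assumed location
remotes::install_github("langcog/peekbankr")  # assumed location
# restart the R session here, then run the import script:
source("data/moore_bergelson_2022/import.R")  # hypothetical path
```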

Common issues to check:

Trials

  • Are trials now unique between administrations? (a quick check is sketched after this list)
  • Is exclusion info handled correctly? (look closely at how exclusion information is marked)
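
One quick way to check trial uniqueness, assuming the processed tables follow the peekbank schema (the administration_id/trial_id columns and the processed_data path are assumptions):

```r
library(dplyr)
library(readr)

aoi_timepoints <- read_csv("processed_data/aoi_timepoints.csv")  # assumed path
# any trial_id attached to more than one administration is suspect
aoi_timepoints %>%
  distinct(administration_id, trial_id) %>%
  count(trial_id) %>%
  filter(n > 1)
```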

Trial Types

  • Check if the trial type IDs are created independently of administrations/subjects (see the sketch after this list)
  • Is vanilla_trial coded appropriately?
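
A rough check for this, assuming trial type identity should be fully determined by the stimulus/condition columns (column names taken from the peekbank schema, so treat them as assumptions for this dataset):

```r
library(dplyr)
library(readr)

trial_types <- read_csv("processed_data/trial_types.csv")  # assumed path
# the same stimulus/condition combination appearing under multiple
# trial_type_ids suggests IDs were generated per subject/administration
trial_types %>%
  count(target_id, distractor_id, target_side, condition) %>%
  filter(n > 1)
```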

Stimuli

  • If the images are on OSF, make sure the stimulus image path points to the image's location on OSF
  • Make sure each row represents a label-image association (a quick filename check is sketched after this list)
    • the labels should be the words that the participants hear. For example, "apple" is okay; "red_apple_little" is wrong and was probably erroneously extracted from the file name
  • Are there items in the imported dataset not mentioned in the paper?
  • Are distractors represented correctly?
    • Special explanation for distractors: If an item only ever appeared in distractor position, it still gets its own row. The label is typically the label given to the image in the experiment description (e.g., "the distractor was an image of a chair"). If there is no obvious label provided in the experiment design, leave the label blank.
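
A crude filter for labels that still look like file names (english_stimulus_label and the path are assumptions from the peekbank schema):

```r
library(dplyr)
library(readr)

stimuli <- read_csv("processed_data/stimuli.csv")  # assumed path
# labels containing underscores or image extensions usually came from file names
stimuli %>%
  filter(grepl("_|\\.(png|jpg|jpeg)$", english_stimulus_label))
```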

Subjects

  • Does CDI data follow the new aux_data format? (an example of the structure follows this list)
  • Age rounded correctly? (decision: we do no rounding)
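
For orientation, recent imports store CDI scores in subject_aux_data roughly like this (field names are an assumption based on other imports; the peekds docs are authoritative):

```r
library(readr)

subjects <- read_csv("processed_data/subjects.csv")  # assumed path
# inspect one subject's aux_data; the JSON below shows the rough shape used
# in other recent imports (field names assumed, values illustrative)
subjects$subject_aux_data[1]
# '{"cdi_responses":[{"instrument_type":"wg","measure":"prod",
#    "rawscore":57,"percentile":43,"age":18}]}'
```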

General

  • Double-check the citation, update it in the dataset table, and make sure it's consistent with the peekbank datasets Google Sheet
  • Are there any TODOs left in the code? Resolve/double-check them
  • Review (or add) a README (example)
    • Make sure any TODOs or other decision points in the comments of the code are documented in the README AND removed from the code to prevent ambiguity
  • General data sanity-checking (summary output helps here; a few quick checks are sketched after this list)
    • are the general numbers (e.g., # of participants, # of stimuli, average trials per administration) in the summary consistent with the paper? aoi_timepoints counts are hard to gauge, but a very small number is probably bad
    • is the subject summary (age, sex distribution) approximately consistent with the paper? (note that it is not surprising if it is not identical - often we have a slightly different dataset and are not trying to reproduce the exact numbers)
    • is the target side distribution skewed towards one side?
    • any weird trial durations?
    • do the cdi rawscore numbers match the instrument and measure?
    • are the exclusion % and the exclusion reasons sensible? (bearing in mind that we only have exclusion info for some datasets)
    • Inspect the timecourse and accuracy plots/output at the end of the import:
      • Compare timecourse patterns with paper (as best as possible)
      • Does the timing seem right? (an accuracy spike later than the point of disambiguation might be sensible; earlier is suspicious)
      • (if multiple conditions) Does the number of conditions make sense in the context of the paper?
      • (if multiple conditions) Are the overall accuracies for conditions vastly different in a way not explained by the paper?
      • Any odd item-level patterns?
      • Any odd subject-level patterns?
    • Any large (unexpected) discrepancies between data reported in paper vs. data in the imported dataset?
  • After checking everything and rerunning the script: Upload the output to OSF
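
A couple of the sanity checks above, sketched out (t_norm and target_side are assumed peekbank column names, and the paths are assumptions):

```r
library(dplyr)
library(readr)

trial_types <- read_csv("processed_data/trial_types.csv")        # assumed path
aoi_timepoints <- read_csv("processed_data/aoi_timepoints.csv")  # assumed path

# target side balance: a strong skew toward one side is a red flag
trial_types %>% count(target_side)

# per-trial durations: look for outliers (t_norm is an assumed column name)
aoi_timepoints %>%
  group_by(administration_id, trial_id) %>%
  summarize(duration = max(t_norm) - min(t_norm), .groups = "drop") %>%
  pull(duration) %>%
  summary()
```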

@mzettersten (Contributor, Author) commented Aug 28, 2024

@adriansteffan Things to check:

  • only verbs seem to make it into the processed data, but there are also nouns according to the paper abstract. Check where the noun data is getting lost
  • CDI data is partially handled incorrectly: participants in experiment 2b completed the WS form; everybody else completed WG (this tracks with participants having only prod vs. prod AND comp scores)

@adriansteffan (Contributor) commented Aug 28, 2024

  1. Getting the nouns:

The data is probably somewhere around here:
https://osf.io/q9uvd/
and also here:
https://osf.io/f6bdn/

The paper can be found here:
https://www.sciencedirect.com/science/article/abs/pii/S0749596X22000249?fr=RR-2

  2. Check their analysis code for weird trial durations (the script claims no rezeroing is needed, but trial times start at != 0; a quick check is sketched below)

  3. All of the verbs should be non-vanilla
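
Re: 2., a quick way to see whether timepoints really start at zero per trial (a sketch; t_norm is an assumed column name, swap in whatever time column their data uses):

```r
library(dplyr)
library(readr)

aoi_timepoints <- read_csv("processed_data/aoi_timepoints.csv")  # assumed path
# distribution of per-trial start times; values other than 0 mean the
# "no rezeroing needed" claim deserves a closer look
aoi_timepoints %>%
  group_by(administration_id, trial_id) %>%
  summarize(t_start = min(t_norm), .groups = "drop") %>%
  count(t_start)
```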

adriansteffan self-assigned this Dec 6, 2024
@adriansteffan (Contributor)

Nouns are imported and (non-)vanilla trials are correctly labeled; only the timing uncertainty for the verb trials remains
