The Mnemonic Similarity Task began in 2007 (Kirwan & Stark, 2007) as a way to assess hippocampal function by using an object recognition memory task that consisted of studied targets, novel foils, and lures that had a controlled level of similarity (mnemonic similarity, not perceptual similarity) to studied items. This "classic version" has been coded up in many ways, including as a C++ stand-alone that many labs used for years to administer the task.
The goal of a number of studies in 2019-22, and one of the aims of an NIA grant (R01 AG066683-01), was to develop a more optimized version of the task that might be more suitable for clinical use. The results of these studies are published (Stark et al., 2023) and a web-based version (JATOS + jsPsych) is available for all to use. The JATOS setup is a bit cumbersome for those who do not want to set up their own web server, however, so the need for a stand-alone was apparent.
Briefly, the oMST is a continuous version of the MST using the old/similar/new response prompt and cutting the stimuli down to 64 1st presentations, 20 repeated targets, and 44 similar-lure trials. In addition, a guided instruction / practice task and a perceptual control task are included. In total, it takes under half the time of the original task.
In addition, the oMST is multi-lingual, available in English, Chinese, Korean, Spanish, and Russian. Contact us if you're interested in helping port the text to other languages.
If you're not a developer, download the appropriate `.dmg` or `.exe` from the Releases page. (Note: currently, the "universal" `.dmg` is actually only Apple Silicon / ARM specific; Intel Macs should use the x64 version.) These builds are not yet signed, so Windows Defender will likely complain. The first time it's run on Windows, it unpacks a bit, but nothing gets "installed", at least not system-wide; it does a local-user install into your AppData folder. On Macs, as it's not signed yet, double-clicking on the app icon will likely lead to macOS preventing it from running. Right-click (two-finger tap on the trackpad) and select Open (and yes, this is different from double-clicking); it should then run and not need this step in the future. Feel free to drag it to the Applications folder if you like as well.
The main screen that appears lets you enter a study-ID and a participant-ID. These are used to save the data (to your desktop). When you type in the study-ID, any other parameters will be automatically restored from any prior runs with that same study-ID, so it's a good idea to set this first.
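As a rough illustration of that restore-on-study-ID behavior, here is a minimal sketch of per-study settings persistence. The file name `settings.json` and the shape of the stored object are assumptions for illustration; the actual app's storage mechanism may differ.

```javascript
// Illustrative sketch only: persist task parameters keyed by study-ID
// so a later run with the same study-ID restores them.
const fs = require("fs");
const path = require("path");

const SETTINGS_FILE = path.join(__dirname, "settings.json"); // assumed location

function loadSettings(studyID) {
  if (!fs.existsSync(SETTINGS_FILE)) return null;
  const all = JSON.parse(fs.readFileSync(SETTINGS_FILE, "utf8"));
  return all[studyID] || null; // restore prior parameters for this study, if any
}

function saveSettings(studyID, params) {
  const all = fs.existsSync(SETTINGS_FILE)
    ? JSON.parse(fs.readFileSync(SETTINGS_FILE, "utf8"))
    : {};
  all[studyID] = params;
  fs.writeFileSync(SETTINGS_FILE, JSON.stringify(all, null, 2));
}
```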
You'll notice a faint box in the upper-left of the screen that, when you mouse over it, says "Experimenter view". Clicking on this reveals a new set of parameters. There you can set parameters and enable or disable each of the optional phases:
Optional phases:
- Consent: A simple consent form. You'll need to rebuild from source to have a lab-specific one.
- Demographics: An NIH-style demographics form.
- Perceptual control: A short perceptual / working memory control task that shows color objects, a noise mask, then either a repeat or a similar lure (matched to the MST's similarity).
- Instructions: A guided instruction phase for the oMST.
Main options:
- Stimulus set: There are 6 main sets of 192 pairs of objects in the MST. Sets are reasonably well-matched for difficulty.
- Sublist: The oMST does not use all 192 pairs and, in fact, each stimulus set can be divided into 3 subsets that have matched difficulty (matching based on the MST's "lure bins"). With 6 sets and 3 sublists per set, you get 18 possible independent tests (see the counterbalancing sketch after this list).
- Response mode: Button / touchscreen (touches are mouse clicks on the buttons) or keyboard responses?
- Language: What language should be used?
In addition, there are two more options:
- Two-choice: Use old/new rather than old/similar/new. Our studies have shown this is less effective / reliable than old/similar/new, but the option is here. We caution against using it, and the instructions for this mode aren't complete.
- Self-paced: We use this in any somewhat impaired population. The image is on the screen for the same amount of time, but when the screen goes blank (as in the normal ISI) it remains on that blank screen awaiting a response rather than automatically moving on to the next trial. This keeps the memory aspects similar to the "normal" version but allows for slower motor components.
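If you want to rotate participants through those 18 independent set/sublist combinations, a helper along these lines may be useful. This is an illustrative sketch, not part of the app, and the round-robin scheme is just one reasonable choice.

```javascript
// Illustrative counterbalancing helper: with 6 stimulus sets and 3 sublists
// each, participants can be rotated through 18 independent tests.
function assignSetAndSublist(participantIndex) {
  const combo = participantIndex % 18; // 18 = 6 sets x 3 sublists
  return {
    set: Math.floor(combo / 3) + 1,    // sets numbered 1-6
    sublist: (combo % 3) + 1,          // sublists numbered 1-3
  };
}

// e.g. the 19th participant (index 18) wraps back around to set 1, sublist 1
console.log(assignSetAndSublist(18)); // { set: 1, sublist: 1 }
```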
Your data will be saved to the Desktop in a folder for the study-ID. Within that, each participant-ID gets a folder and within that, each session gets a folder. Within that, files are stored with date-codes, so you should never overwrite any existing results.
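For example, a single run might produce a layout like the following (the study, participant, session, and file names here are illustrative, and the actual date-code format may differ):

```
Desktop/
└── myStudy/                  <- study-ID
    └── P001/                 <- participant-ID
        └── session1/         <- session
            ├── oMST_2025-01-15_1032.csv
            └── oMST_2025-01-15_1032.json
```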
Data are saved in both a `.json` and a `.csv` file. The `.json` is a jsPsych data file output in all its glory. The `.csv` is very much like the original stand-alone in that you've got multiple tasks in one text file with data in comma-separated format. Spreadsheet programs are perfectly happy with this, but you may want to cut/paste out the blocks you really need.
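If you'd rather pull out those per-task blocks programmatically, a sketch like this may help. It assumes the blocks are separated by blank lines, and `oMST_data.csv` is a placeholder file name; check both assumptions against your own output before relying on it.

```javascript
// Hypothetical sketch: split the combined oMST .csv into per-task blocks.
// ASSUMPTION: task blocks are separated by one or more blank lines.
const fs = require("fs");

const text = fs.readFileSync("oMST_data.csv", "utf8"); // placeholder name

// Split on blank lines; each non-empty chunk is treated as one task's rows.
const blocks = text
  .split(/\r?\n\s*\r?\n/)
  .filter((b) => b.trim().length > 0);

blocks.forEach((block, i) => {
  const rows = block.trim().split(/\r?\n/).map((line) => line.split(","));
  console.log(`Block ${i}: ${rows.length} rows, first cell "${rows[0][0]}"`);
});
```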
Mad props go to Audrey Hempel, who wrote the first version of the stand-alone task rather than having a relaxing summer break. That was the real heavy lift. After that, Craig Stark created this fork for further development of the project.
There are several technologies that make this work. At its core, this is an Electron application. Electron is a tool that essentially bundles a web browser and your JavaScript code into a single desktop application. Here, we took the oMST that typically runs in a web browser (via JATOS and jsPsych) as the basis and ported it into Honeycomb, a purpose-built template for this kind of thing (see below). We're using this here for stand-alone apps, but Honeycomb integrates with things like PsiTurk and Firebase if you're into that sort of thing.
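To make the Electron idea concrete, the pattern boils down to something like the minimal main-process sketch below. The file name `index.html` is illustrative rather than this repo's actual layout; Honeycomb handles this wiring for you.

```javascript
// Minimal sketch of the Electron pattern described above: a desktop app that
// is really a bundled browser window loading the jsPsych task page.
const { app, BrowserWindow } = require("electron");

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 1280, height: 800 });
  win.loadFile("index.html"); // the jsPsych/Honeycomb task bundle (assumed name)
});

app.on("window-all-closed", () => app.quit());
```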
Honeycomb is a template for reproducible psychophysiological tasks for clinic, laboratory, and home use. It is maintained by members of the Center for Computation and Visualization and the Neuromotion Lab at Brown University. To learn about installation, usage, and deployment, please visit our documentation.
Note: Honeycomb was previously named Neuro Task Starter, so some references may still refer to it as such.
If you use Honeycomb in your work, please cite: