diff --git a/manuals.html b/manuals.html index a961bb4..146edd2 100644 --- a/manuals.html +++ b/manuals.html @@ -596,7 +596,7 @@
An open-source software for neurosurgical trajectory planning, visualization, and postoperative assessment
What is trajectoryGuide?
trajectoryGuide provides the capability to plan surgical trajectories within 3D Slicer, an open-source medical imaging application. trajectoryGuide contains modules that span the three phases of neurosurgical trajectory planning:
"},{"location":"index.html#preoperative-features","title":"Preoperative features","text":"The main goal of image guidance in neurosurgery is to accurately project magnetic resonance imaging (MRI) and/or computed tomography (CT) data into the operative field for defining anatomical landmarks, pathological structures and margins oftumors. To achieve this, \"neuronavigation\" software solutions have been developed to provide precise spatial information to neurosurgeons. Safe and accurate navigation of brain anatomy is of great importance when attempting to avoid important structures such as arteries and nerves.
Neuronavigation software provides orientation information to the surgeon during all three phases of surgery: 1) pre-operative trajectory planning, 2) the intraoperative stereotactic procedure, and 3) post-operative visualization. Trajectory planning is performed prior to surgery using preoperative MRI data. On the day of surgery, the plans are transferred to stereotactic space using a frame or frame-less system. In both instances, a set of radiopaque fiducials are detected, providing the transformation matrix from anatomical space to stereotactic space. During the surgical procedure, the plans are updated according to intraoperative data collected (i.e. microelectrode recordings, electrode stimulation etc.). After the surgery, post-operative MRI or CT imaging confirms the actual position of the trajectory(ies).
"},{"location":"about.html#trajectoryguide","title":"trajectoryGuide?","text":"trajectoryGuide is a surgical planning, visuazliation, and postoperative assessment tool used for various trajectory surguries. It provides capabilities across the entire surgical spectrum:
"},{"location":"about.html#what-is-3d-slicer","title":"What is 3D Slicer?","text":"3D Slicer is an open-source software platform for medical image informatics, image processing, and 3D visualization. Built over two decades through support from the National Institutes of Health and a worldwide developer community, Slicer brings free, powerful cross-platform (Linux, MacOSX, and Windows) processing tools to physicians, researchers, and the general public.
"},{"location":"about.html#3d-slicer-features","title":"3D Slicer Features","text":"Multi-organ: from head to toe Support for multi-modality imaging including: MRI, CT, US, nuclear medicine, and microscopy Bidirectional interface for devices
"},{"location":"about.html#sources","title":"Sources","text":"3D Slicer as an Image Computing Platform for the Quantitative Imaging Network <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3466397/>
_. Magnetic Resonance Imaging. 2012 Nov;30(9):1323-41. PMID: 22770690.Install 3D Slicer Version 4.11.0 (or later) by downloading it from the 3D Slicer website.
"},{"location":"installation.html#trajectoryguide-source-code","title":"trajectoryGuide source code","text":"Download the trajectoryGuide source code from GitHub. Unzip the folder and store it somewhere on your system.
"},{"location":"installation.html#template-space-directory","title":"Template space directory","text":"Download the template space zip file from the most recent GitHub release. Unzip the folder and move it into the trajectoryGuide folder at the location resources/ext_libs/space
.
You will first need to install a few Python libraries before loading trajectoryGuide. Click the blue-and-yellow Python button located at the right of the top menu bar.
Python interactor button.
The Python interactor should now be visible at the bottom of the 3D Slicer window.
3D Slicer Python interactor.
Copy and paste the command below into the Python interactor box, press Enter
to run the command.
pip_install('--upgrade pip')\n
Copy and paste the command below into the Python interactor box, press Enter
to run the command.
pip_install('scikit-image scikit-learn pandas scipy==1.5.4')\n
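To confirm the libraries installed correctly, a quick check can be run in the same Python interactor (a minimal sketch; the versions reported will depend on your setup):
import skimage, sklearn, pandas, scipy
# Each import should succeed and scipy should report version 1.5.4
print(skimage.__version__, sklearn.__version__, pandas.__version__, scipy.__version__)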
trajectoryGuide uses the Volume Reslice Driver module from SlicerIGT. To install it, use the Extension Manager module within 3D Slicer, or download the source code for your Slicer version here (select Slicer version --> extensions --> SlicerIGT).
"},{"location":"installation.html#add-trajectoryguide-modules","title":"Add trajectoryGuide modules","text":"Note
You will only need to add the following modules: \u2003\u2003\u2611 dataImport \u2003\u2003\u2611 frameDetect \u2003\u2003\u2611 registration \u2003\u2003\u2611 anatomicalLandmarks \u2003\u2003\u2611 preopPlanning \u2003\u2003\u2611 intraopPlanning \u2003\u2003\u2611 postopProgramming \u2003\u2003\u2611 postopLocalization \u2003\u2003\u2611 dataView
In the top menu, click on the Edit
menu and select Application settings
3D Slicer Edit menu.
In the settings dialog window select Modules
, click the right-facing arrows next to the box with the text Additional module paths
and click Add
Navigate to where you stored the source code for trajectoryGuide, select each of the sub-folders listed in the Note box above and click Choose
. You will need to add each folder one-by-one.
3D Slicer add module path.
3D Slicer will prompt you to restart at this point; click Yes
3D Slicer restart notification.
Now when 3D Slicer restarts, trajectoryGuide will be included in Slicer's modules menu.
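If you prefer to script this step instead of using the dialogs, the same paths can be added from the Python interactor. This is a hedged sketch: it assumes Slicer's Qt settings key for additional module paths is Modules/AdditionalPaths, and the install location shown is a placeholder.
import slicer
settings = slicer.app.revisionUserSettings()
base = '/path/to/trajectoryGuide'   # placeholder: wherever you unzipped the source code
modules = ['dataImport', 'frameDetect', 'registration', 'anatomicalLandmarks',
           'preopPlanning', 'intraopPlanning', 'postopProgramming',
           'postopLocalization', 'dataView']
paths = settings.value('Modules/AdditionalPaths') or []
if isinstance(paths, str):
    paths = [paths]
settings.setValue('Modules/AdditionalPaths', list(paths) + [base + '/' + m for m in modules])
# Restart 3D Slicer for the new module paths to take effect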
"},{"location":"manuals.html","title":"Product Manuals","text":""},{"location":"manuals.html#electrophysiology","title":"Electrophysiology","text":""},{"location":"manuals.html#alpha-omega","title":"Alpha Omega","text":"
Warning
Ensure you are using 3D Slicer 4.11
"},{"location":"3dslicer/01_interface.html#interface-overview","title":"Interface Overview","text":"\u2003\u2003Slicer stores all loaded data in a data repository, called the \u201cscene\u201d (or Slicer scene or MRML scene). Each data set, such as an image volume, surface model, or point set, is represented in the scene as a \u201cnode\u201d.
\u2003\u2003Slicer provides a large number of \u201cmodules\u201d, each implementing a specific set of functions for creating or manipulating data in the scene. Modules typically do not interact with each other directly: they just all operate on the data nodes in the scene. The Slicer package contains over 100 built-in modules, and additional modules can be installed by using the Extension Manager.
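Because everything in the scene is a node, loaded data can also be inspected programmatically from the Python interactor. A minimal sketch using the standard slicer.util helpers (the file path is a placeholder):
import slicer
# Loading an image volume adds a vtkMRMLScalarVolumeNode to the scene
volume_node = slicer.util.loadVolume('/path/to/sub-P001_T1w.nii.gz')
# List every volume node currently in the scene
for node in slicer.util.getNodesByClass('vtkMRMLScalarVolumeNode'):
    print(node.GetName())
# Access the voxel data of a volume node as a numpy array
voxels = slicer.util.arrayFromVolume(volume_node)
print(voxels.shape)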
"},{"location":"3dslicer/01_interface.html#2d-views","title":"2D Views","text":"\u2003\u2003Three default slice views are provided (with Red, Yellow and Green colored bars) in which Axial, Saggital, Coronal or Oblique 2D slices of volume images can be displayed. Additional generic slice views have a grey colored bar and an identifying number in their upper left corner.
\u2003\u2003Slice View Controls: The colored bar across any Slice View shows a pushpin icon on its left. When the mouse rolls over this icon, a panel for configuring the slice view is displayed. The panel is hidden when the mouse moves away. For persistent display of this panel, just click the pushpin icon. For more options, click the double-arrow icon.
The View Controllers module provides an alternative way of displaying these controllers in the Module Panel.
Displays a rendered 3D view of the scene along with visual references to specify orientation and scale. Default orientation axes: R (right), L (left), A (anterior), P (posterior), S (superior), I (inferior).
3D View Controls: The blue bar across any 3D View shows a pushpin icon on its left. When the mouse rolls over this icon, a panel for configuring the 3D View is displayed. The panel is hidden when the mouse moves away. For persistent display of this panel, just click the pushpin icon.
"},{"location":"3dslicer/01_interface.html#mouse-keyboard-shortcuts","title":"Mouse & Keyboard Shortcuts","text":"The following summary of shortcuts is taken from the 3D Slicer documentation.
Note
These shortcuts work in any stable 3D Slicer version >= 4.10.0
"},{"location":"3dslicer/01_interface.html#generic-shortcuts","title":"Generic shortcuts","text":" Shortcut Operation Ctrl
+ f
find module by name (hit Enter
to select) Ctrl
+ a
add data from file Ctrl
+ o
add data from file Ctrl
+ s
save data to files Ctrl
+ w
close scene Ctrl
+ 0
show Error Log Ctrl
+ 1
show Application Help Ctrl
+ 2
show Application Settings Ctrl
+ 3
show/hide Python Interactor Ctrl
+ 4
show Extension Manager Ctrl
+ 5
show/hide Module Panel Ctrl
+ h
open default startup module (configurable in Application Settings)
\u2003\u2003The following shortcuts are available when a slice view is active. To activate a view, click inside the view: if you do not want to change anything in the view, just activate it then do right-click
without moving the mouse. Note that simply hovering the mouse over a slice view will not activate the view.
right-click + drag up/down: zoom image in/out
left-click + drag up/down: adjust level of image
left-click + drag left/right: adjust window of image
Ctrl + mouse wheel: zoom image in/out
middle-click + drag: pan (translate) view
Shift + left-click + drag: pan (translate) view
left arrow / right arrow: move to previous/next slice
b / f: move to previous/next slice
Shift + mouse move: move crosshair in all views
v: toggle slice visibility in 3D view
r: reset zoom and pan to default
g: toggle segmentation or labelmap volume
t: toggle foreground volume visibility
[ / ]: use previous/next volume as background
{ / }: use previous/next volume as foreground
\u2003\u2003The following shortcuts are available when a 3D view is active. To activate a view, click inside the view: if you do not want to change anything in the view, just activate it then do right-click
without moving the mouse. Note that simply hovering the mouse over a view will not activate the view.
Shift + mouse move: move crosshair in all views
left-click + drag: rotate view
left arrow / right arrow: rotate view
up arrow / down arrow: rotate view
End or Keypad 1: rotate to view from anterior
Shift + End or Shift + Keypad 1: rotate to view from posterior
Page Down or Keypad 3: rotate to view from left side
Shift + Page Down or Shift + Keypad 3: rotate to view from right side
Home or Keypad 7: rotate to view from superior
Shift + Home or Shift + Keypad 7: rotate to view from inferior
right-click + drag up/down: zoom view in/out
Ctrl + mouse wheel: zoom view in/out
+ / -: zoom view in/out
middle-click + drag: pan (translate) view
Shift + left-click + drag: pan (translate) view
Shift + left arrow / Shift + right arrow: pan (translate) view
Shift + up arrow / Shift + down arrow: pan (translate) view
Shift + Keypad 2 / Shift + Keypad 4: pan (translate) view
Shift + Keypad 6 / Shift + Keypad 8: pan (translate) view
Keypad 0 or Insert: reset zoom and pan, rotate to nearest standard view
Warning
Ensure you are using 3D Slicer 4.11
"},{"location":"3dslicer/02_saving_data.html#walkthrough","title":"Walkthrough","text":"Click on the File menu at the top.
3D Slicer file menu.
Choose Save; the dialog box shown below will appear:
3D Slicer file menu.
If this is your first time saving, you will have to define the directory in which to save the files. Click on Change directory for selected files. The dialog box below will appear:
3D Slicer file menu.
Find the directory where you want to save the data, create a new folder called [VolumeID]_scene, and then double-click on it so that you are now within the directory. Select Choose.
You will now notice that the .nii file is de-selected and is in the original directory location. All the other files you will be saving are in the newly created [VolumeID]_scene folder. Click Save.
3D Slicer file menu.
If this is not your first time saving, you will see two warning messages. The first will notify you that the .mrml file already exists and ask if you want to replace it. Click OK.
3D Slicer file menu.
A second warning message will appear letting you know that the .fcsv file already exists and ask if you would like to replace it. Click Yes to All. This will overwrite your old datafiles with the newer ones.
3D Slicer file menu.
To close the current scene, before opening a new subject, click the File menu and select Close Scene.
3D Slicer file menu.
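The same walkthrough can be reproduced from the Python interactor if you prefer to script it. A minimal sketch using slicer.util (the output folder is a placeholder; a .mrb bundle keeps the whole scene in a single file):
import os, slicer
scene_dir = '/path/to/VolumeID_scene'   # placeholder output folder
os.makedirs(scene_dir, exist_ok=True)
slicer.util.saveScene(os.path.join(scene_dir, 'scene.mrb'))   # save the whole scene
slicer.mrmlScene.Clear(0)                                     # close the current scene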
"},{"location":"widgets/00_overview.html","title":"Overview","text":""},{"location":"widgets/00_overview.html#neuronavigation","title":"Neuronavigation","text":"
\u2003\u2003The main goal of image guidance in neurosurgery is to accurately project magnetic resonance imaging (MRI) and/or computed tomography (CT) data into the operative field for defining anatomical landmarks, pathological structures and margins of tumors. To achieve this, \"neuronavigation\" software solutions have been developed to provide precise spatial information to neurosurgeons. Safe and accurate navigation of brain anatomy is of great importance when attempting to avoid important structures such as arteries and nerves.
\u2003\u2003Neuronavigation software provides orientation information to the surgeon during all three phases of surgery: 1) pre-operative trajectory planning, 2) the intraoperative stereotactic procedure, and 3) post-operative visualization. Trajectory planning is performed prior to surgery using preoperative MRI data. On the day of surgery, the plans are transferred to stereotactic space using a frame or frame-less system. In both instances, a set of radiopaque fiducials are detected, providing the transformation matrix from anatomical space to stereotactic space. During the surgical procedure, the plans are updated according to intraoperative data collected (i.e. microelectrode recordings, electrode stimulation etc.). After the surgery, post-operative MRI or CT imaging confirms the actual position of the trajectory(ies).
"},{"location":"widgets/00_overview.html#trajectoryguide","title":"trajectoryGuide","text":"trajectoryGuide is an open-source software suite that provides the capability to plan neurosurgical trajectories within 3D Slicer. trajectoryGuide contains several modules that span the three phases of surgical intervention: pre-op, intra-op, and post-op.
"},{"location":"widgets/00_overview.html#prior-art","title":"Prior Art","text":""},{"location":"widgets/00_overview.html#open-source","title":"Open source","text":""},{"location":"widgets/00_overview.html#pydbs","title":"PyDBS","text":"Developed by Pierre Jannin. The software is not freely available, a description of the tool can be found in this publication.
"},{"location":"widgets/00_overview.html#tactics","title":"Tactics","text":"Developed by David G. Gobbi and Yves P.Starreveld. The software is available on GitHub.
"},{"location":"widgets/00_overview.html#ogles2","title":"Ogles2","text":"Developed by Johannes A. Koeppen and based on Ogle written by Dr. Michael J Gourlay. The software can be downloaded from Sourceforge or the Neuranse website.
"},{"location":"widgets/00_overview.html#commercial-software","title":"Commercial software","text":""},{"location":"widgets/00_overview.html#stealthstation-s8","title":"StealthStation S8","text":"Developed by Medtronic. More information can be found on their website.
"},{"location":"widgets/00_overview.html#neuroinspire","title":"Neuroinspire","text":"Developed by Renishaw. More information can be found on their website.
"},{"location":"widgets/00_overview.html#elements","title":"Elements","text":"Developed by Brainlab. More information can be found on their website.
"},{"location":"widgets/00_overview.html#inomed-planning-system","title":"Inomed Planning System","text":"Developed by Inomed. More information can be found on their website.
"},{"location":"widgets/00_overview.html#waypoint-navigator","title":"WayPoint Navigator","text":"Distributed by FHC. More information can be found on their website.
"},{"location":"widgets/00_overview.html#mnps","title":"MNPS","text":"Developed by Mevis. More information can be found on their website.
"},{"location":"widgets/00_overview.html#neurosight","title":"NeuroSight","text":"Developed by Integra. More information can be found on their website.
"},{"location":"widgets/00_overview.html#claronav-navient","title":"ClaroNav Navient","text":"Developed by ClaroNav. More information can be found on their website.
"},{"location":"widgets/01_data_import.html","title":"Data Import","text":"
Warning
If you have not installed trajectoryGuide into 3D Slicer please follow the installation instructions.
\u2003\u2003The first module in trajectoryGuide handles the import of patient imaging data. The data should be within a single directory; this directory will be selected within the import window (do not select the files within the directory). During the initial data import for a patient, trajectoryGuide will store a copy of the imaging data into a source
directory as a backup; these files will remain unchanged.
trajectoryGuide stores all imaging data in NIfTI (Neuroimaging Informatics Technology Initiative) format, with the file extension .nii.gz
. If the original imaging data is in DICOM format, the files will first need to be converted to NIFTI according to BIDS (see section above).
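A common way to perform that conversion is dcm2niix, which writes compressed NIfTI files together with BIDS-style JSON sidecars. The sketch below runs it from Python; the subject/session labels and paths are placeholders, and the options should be checked against the dcm2niix documentation for your version.
import subprocess
subprocess.run([
    'dcm2niix',
    '-z', 'y',                            # gzip the output (.nii.gz)
    '-f', 'sub-P001_ses-pre_T1w',         # output filename (placeholder labels)
    '-o', 'bids/sub-P001/ses-pre/anat',   # output directory (must already exist)
    '/path/to/dicom/T1w_series',          # input DICOM folder
], check=True)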
trajectoryGuide requires the input data folder to be organized according to Brain Imaging Data Structure (BIDS). The following is an example input directory.
bids/\n \u251c\u2500\u2500 dataset_description.json\n \u2514\u2500\u2500 sub-<subject_label>/\n \u2514\u2500\u2500 ses-<ses_label>/\n \u251c\u2500\u2500 anat/\n \u2502 \u251c\u2500\u2500 sub-<subject_label>_ses-<ses_label>_T1w.nii.gz\n \u2502 \u251c\u2500\u2500 sub-<subject_label>_ses-<ses_label>_PD.nii.gz\n \u2502 \u2514\u2500\u2500 sub-<subject_label>_ses-<ses_label>_acq-Tra_T2w.nii.gz\n \u2514\u2500\u2500 ct/\n \u2514\u2500\u2500 sub-<subject_label>_ses-<ses_label>_acq-Frame_ct.nii.gz\n
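Because the filenames follow the BIDS entity scheme, individual scans can be located with simple pattern matching. A minimal sketch (the subject and session labels are placeholders):
from pathlib import Path
bids_root = Path('bids')
subject, session = 'sub-P001', 'ses-pre'   # placeholder labels
anat_dir = bids_root / subject / session / 'anat'
# Find the T1w volume(s) and any frame CT for this subject/session
t1w_files = sorted(anat_dir.glob(f'{subject}_{session}*_T1w.nii.gz'))
frame_ct = sorted((bids_root / subject / session / 'ct').glob('*acq-Frame_ct.nii.gz'))
print(t1w_files, frame_ct)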
"},{"location":"widgets/01_data_import.html#output-directory-structure","title":"Output directory structure","text":"trajectoryGuide trajectoryGuide stores processed data within the derivatives directoy of the BIDS dataset.
bids/\n \u251c\u2500\u2500 dataset_description.json\n \u2514\u2500\u2500 sub-<subject_label>/...\nderivatives/\n \u2514\u2500\u2500 trajectoryGuide/\n \u2514\u2500\u2500 sub-<subject_label>/\n \u251c\u2500\u2500 sub-<subject_label>_surgical_data.json\n \u251c\u2500\u2500 sub-<subject_label>_T1w.nii.gz\n \u251c\u2500\u2500 sub-<subject_label>_T1w.json\n \u251c\u2500\u2500 sub-<subject_label>_space-T1w_acq-Tra_T2w.nii.gz\n \u251c\u2500\u2500 sub-<subject_label>_space-T1w_acq-Tra_T2w.json\n \u251c\u2500\u2500 sub-<subject_label>_desc-rigid_from-TraT2w_to-T1w_xfm.h5\n \u251c\u2500\u2500 sub-<subject_label>_space-T1w_PD.nii.gz\n \u251c\u2500\u2500 sub-<subject_label>_space-T1w_PD.json\n \u251c\u2500\u2500 sub-<subject_label>_desc-rigid_from-PD_to-T1w_xfm.h5\n \u251c\u2500\u2500 sub-<subject_label>_ses-pre_coordsystem.json\n \u251c\u2500\u2500 settings/...\n \u251c\u2500\u2500 source/...\n \u251c\u2500\u2500 space/...\n \u2514\u2500\u2500 summaries/...\n
\u2003\u2003Each image volume has an associated .json
file that stores metadata associated with the volume as it progresses through the trajectoryGuide workflow. Select the import options prior to loading the patient directory.
Data import module in trajectoryGuide.
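The sidecar is ordinary JSON, so the metadata can be inspected (or patched by hand if something goes wrong) with a few lines of Python. A minimal sketch; the path and the added field are illustrative, not the exact keys trajectoryGuide writes.
import json
sidecar = 'derivatives/trajectoryGuide/sub-P001/sub-P001_T1w.json'   # placeholder path
with open(sidecar) as f:
    meta = json.load(f)
print(meta)                        # metadata accumulated so far in the workflow
meta['note'] = 'checked manually'  # illustrative field, not a trajectoryGuide key
with open(sidecar, 'w') as f:
    json.dump(meta, f, indent=2)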
"},{"location":"widgets/04_frame_detection.html","title":"Frame Detection","text":"
Note
To navigate through the 2D view:
Move crosshairs in all views: hold Shift while moving the mouse
Zoom in/out: hold the right mouse button while moving the mouse up/down (or hold Control/Command and scroll)
Pan (translate) the scan: hold the middle mouse button while moving the mouse
\u2003\u2003Automatic frame detection works for both CT and MRI. From the drop-down menu next to Fiducial Volume
, select the volume containing the stereotactic frame. Choose the stereotactic frame that is captured in the CT volume and press Detect Frame Fiducials
.
Frame detection widget interface.
If the automatic detection was successful you will see an image like this:
Frame fiducials with frame registration errors.
\u2003\u2003Scroll up/down the slices to check the accuracy of the frame detection. The displayed numbers are the fiducial registration errors; lower values indicate a more accurate registration. Values lower than 0.5 mm appear in green, while values above 0.5 mm appear in red. On the left-hand side you will see the overall frame registration error (anything below 0.5 mm should be acceptable).
\u2003\u2003If you are satisfied with the results, select Confirm Frame Fiducials. If you are not satisfied, you can try adjusting the frame registration settings and re-running the auto-detection (see the following section).
"},{"location":"widgets/04_frame_detection.html#adjust-frame-registration-settings","title":"Adjust frame registration settings","text":"You can modify the default frame registration settings by clicking the Advanced Settings box in the frame detection widget.
Frame detection advanced settings.
\u2003\u2003The first parameter to adjust is Match Centroids: select Yes. A pop-up message will appear asking if you want to overwrite the previous frame registration data; select Yes:
Frame detection pop-up message.
\u2003\u2003If you are still not happy with the registration, try increasing the number of iterations to 200 and re-run; if that is not enough, increase the value further to 300. You may also want to try decreasing the radius of the target by 0.1 mm. The last resort is to adjust the transform type; however, this will introduce some non-linearity into the registration.
"},{"location":"widgets/04_frame_detection.html#manual-frame-detection","title":"Manual frame detection","text":"\u2003\u2003To run manual frame detection select the button Manual Detection. You will need to identify each frame fiducial one-by-one on the same axial slice. If you are unsure of how the stereotactic frame fiducials are numbered you can press Frame Fiducial Legend
to see the mapping. All point fiducials will need to be placed on the same axial slice. When you are finished, press Confirm Frame Fiducials.
Manual frame detection fiducials.
"},{"location":"widgets/04_frame_detection.html#supported-frame-systems","title":"Supported Frame Systems","text":""},{"location":"widgets/04_frame_detection.html#leksell-frame-localizer","title":"Leksell frame localizer","text":"Leksell stereotactic system.
"},{"location":"widgets/04_frame_detection.html#brw-frame-localizer","title":"BRW frame localizer","text":"BRW stereotactic system.
"},{"location":"widgets/04_frame_detection.html#crw-frame","title":"CRW frame","text":"CRW stereotactic system.
"},{"location":"widgets/04_frame_detection.html#automatic-frame-detection-algorithm","title":"Automatic frame detection algorithm","text":"\u2003\u2003The automatic frame detection algorithm first employs an intensity threshold to identify pixel clusters that may belong to an N-localizer. After the entire image volume is scanned, the identified clusters are either accepted or rejected based on the stereotactic frame geometry. The intensity threshold is a binary threshold that results in pixel values less than a specified intensity value to be removed (i.e. brain tissue will be removed). For the Leksell and CRW frame systems, a morphological erosion step is applied to the threshold image followed by a morphological dilation. Erosion of a binary image sharpens the boundaries of foreground pixels, which will act to make features in the image volume \u201cthinner\u201d. The resulting image will contain frame and skull artifact but the frame fiducial markers will no longer be present. To finalize the image mask, a morphological dilation step is applied to the eroded image to enlarge the remaining structures in the image. Since the frame fiducial markers were removed with the erosion step, the objects being enlarged are left-over artifact to be removed - which forms the final image mask. The image mask is inverted and intersected with the threshold image to recover the fiducial markers.
\u2003\u2003The resulting masked image is processed further to obtain connected components (i.e. neighbouring pixels that share the same value). As long as neighbouring pixels share the same value, they will be labelled as a single region. All connected regions are assigned the same integer value to form clusters of connected pixels. Since the dimensions of the N-localizer are known the expected pixel cluster size can be estimated.
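The scikit-image package installed earlier provides the building blocks for this kind of pipeline. The sketch below illustrates the general idea only (threshold, erode and dilate to build a mask of non-fiducial structures, invert and intersect, then filter connected components by size); the threshold and cluster-size values are illustrative, not trajectoryGuide's actual parameters.
import numpy as np
from skimage import measure, morphology

def candidate_fiducial_clusters(volume, threshold=2000, min_size=20, max_size=400):
    # Keep bright voxels (frame, skull artifact, and fiducial rods)
    binary = volume > threshold
    # Erosion removes thin structures such as the fiducial rods
    eroded = morphology.binary_erosion(binary)
    # Dilation re-grows what survived erosion: the left-over artifact mask
    mask = morphology.binary_dilation(eroded)
    # Invert the mask and intersect with the threshold image to recover the fiducials
    fiducials = binary & ~mask
    # Connected components: neighbouring voxels sharing the same value become one label
    labels = measure.label(fiducials)
    keep = np.zeros_like(labels, dtype=bool)
    for region in measure.regionprops(labels):
        if min_size <= region.area <= max_size:   # expected N-localizer cluster size
            keep[labels == region.label] = True
    return keep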
"},{"location":"widgets/04_frame_detection.html#leksell-localization-sample","title":"Leksell localization sample","text":""},{"location":"widgets/04_frame_detection.html#brw-localization-sample","title":"BRW localization sample","text":""},{"location":"widgets/04_frame_detection.html#crw-localization-sample","title":"CRW localization sample","text":""},{"location":"widgets/05_registration.html","title":"Registration","text":""},{"location":"widgets/05_registration.html#patient-space-registration","title":"Patient Space Registration","text":"\u2003\u2003All registrations with patient scans will be rigid registrations. With some of the more advanced algorithms you can override this and run non-linear registration but it is strongly discouraged. See the below Algorithms to learn more about each algorithm and the respective settings.
Patient space registration settings.
"},{"location":"widgets/05_registration.html#registration-settings","title":"Registration Settings","text":"\u2003\u2003Within the Reference Volume drop-down box, select the scan you want to co-register all other scans to (Reference). In the Frame Volume drop-down box, select the scan that contains the stereotactic fiducials. In the Floating Volumes drop-down box, all other scans (floating) will be checked to indicate they will be registered to the reference. If there are any floating scans you do not want registered, uncheck them.
Patient space drop-down volume boxes.
\u2003\u2003To begin the registration, press the Run Registration button. The Registration Process box will display updated information during the registration. When the registration is complete, the view will be automatically changed to a compare view.
Patient space drop-down volume boxes.
"},{"location":"widgets/05_registration.html#check-registration-results","title":"Check Registration Results","text":"\u2003\u2003For each registration you will either select Confirm Registration or Decline Registration. If you choose to decline a registration, the registration can be re-run with a different algorithm. To check the registration, it is helpful to use the opacity slider to change the opacity of the foreground scan (floating scan).
Patient space drop-down volume boxes.
\u2003\u2003You can also use the Layer Reveal tool to check the registration in more detail. This tool displays a square that contains half the foreground scan and half the background scan.
Registration layer reveal tool.
\u2003\u2003When you have finished checking the registrations, any confirmed scans will disappear from the Floating Volumes drop-down box; declined scans will still appear in the drop-down box. To re-run the registration, update any settings and press the Run Registration button; all previous registration information for the current floating scans will be erased.
"},{"location":"widgets/05_registration.html#algorithms","title":"Algorithms","text":"\u2003\u2003The default algorithm will be NiftyReg using nearest neighbor interpolation when applying the transform. You are able to change the registration algorithm and parameters according to the following information.
"},{"location":"widgets/05_registration.html#niftyreg-reg_aladin","title":"NiftyReg - reg_aladin","text":"For information about this algorithm you can visit this page.
For information about this algorithm you can visit this page.
\u2003\u2003For information about this algorithm you can visit this page. This algorithm gives the user more control over each step. The user can specify the \"stages\" of registration, where a stage consists of a transform and an image metric. Each stage consists of levels with specific values set for iterations, shrink factors, and smoothing sigmas.
\u2003\u2003Click Run Registration
. The registration progression will be updated within the Registration Progress
window. Once registration is completed, you will see the co-registered volumes appear in the floating drop-down box (under Co-registered Volumes
). You will now confirm the registration results by clicking the Compare Volumes
button. For each registration you will either select Confirm Registration
or Decline Registration
. If you choose to decline a registration, you will be able to re-run the registration with a different algorithm.
"},{"location":"widgets/06_anatomical_fiducials.html","title":"Registration","text":"
Note
To navigate through the 2D view:
Move crosshairs in all views: hold Shift while moving the mouse
Zoom in/out: hold the right mouse button while moving the mouse up/down (or hold Control/Command and scroll)
Pan (translate) the scan: hold the middle mouse button while moving the mouse
\u2003\u2003The midline plane will need to be determined, which relies on four points: the anterior commissure (AC), the posterior commissure (PC), and two midline points (Mid 1-2). The midline points should include at least one interhemispheric point and one brainstem point (see the section below for landmark positions). These four points are then used to define the midline plane, which in turn is used to define the Talairach coordinate system.
\u2003\u2003To place a fiducial point, click on the place point button and drop the point at the indicated landmark. Once you have placed AC, PC, and at least two midline points, click Confirm Fiducials. A new entry will be added to the fiducial table for the point MCP
.
Warning
If you modify any fiducial points you will need to press Confirm Fiducials again to re-calculate MCP
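MCP is the mid-commissural point, i.e. the midpoint of AC and PC, and together with a midline point these fiducials fix the ACPC coordinate frame. A minimal numpy sketch of that geometry is shown below; it is illustrative only, and trajectoryGuide's own implementation may differ in conventions such as axis order and sign.
import numpy as np

def acpc_frame(ac, pc, midline):
    # Return MCP and an orthonormal ACPC frame built from AC, PC and one midline point
    ac, pc, midline = map(np.asarray, (ac, pc, midline))
    mcp = (ac + pc) / 2.0                      # mid-commissural point
    y = (ac - pc) / np.linalg.norm(ac - pc)    # anterior axis, along PC -> AC
    x = np.cross(y, midline - mcp)             # lateral axis, perpendicular to the midline plane
    x = x / np.linalg.norm(x)
    z = np.cross(x, y)                         # superior axis completes the right-handed frame
    return mcp, np.vstack([x, y, z])

# Example with made-up coordinates (millimetres): express a point in ACPC coordinates
mcp, rotation = acpc_frame(ac=[1.2, 4.5, 2.0], pc=[0.8, -21.0, 1.5], midline=[1.0, -8.0, 30.0])
point_acpc = rotation @ (np.array([10.0, -5.0, 3.0]) - mcp)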
The anterior commissure.
"},{"location":"widgets/06_anatomical_fiducials.html#pc-point","title":"PC point","text":"The posterior commissure.
"},{"location":"widgets/06_anatomical_fiducials.html#midline-points","title":"Midline Points","text":""},{"location":"widgets/06_anatomical_fiducials.html#genu","title":"Genu","text":"Genu.
"},{"location":"widgets/06_anatomical_fiducials.html#infracollicular-sulcus","title":"Infracollicular Sulcus","text":"The infracollicular sulcus.
"},{"location":"widgets/06_anatomical_fiducials.html#superior-interpeduncular-fossa","title":"Superior interpeduncular fossa","text":"The superior interpeduncular fossa.
"},{"location":"widgets/07_preoperative_planning.html","title":"Preoperative Planning","text":"
Note
To navigate through the 2D view:
Move crosshairs in all views: hold Shift while moving the mouse
Zoom in/out: hold the right mouse button while moving the mouse up/down (or hold Control/Command and scroll)
Pan (translate) the scan: hold the middle mouse button while moving the mouse
\u2003\u2003The planning module contains two coordinate groupboxes, one for ACPC space and the other for stereotactic space. The coordinates are linked to the position of the crosshairs; when the crosshairs move, the coordinates are updated in real time.
The preop planning module.
trajectoryGuide links objects and data in the scene to the plan name. When you switch between plans, the values in the coordinate boxes will update to the current plan values. Before setting a target/entry, you will need to add a new plan. Click Add
in the Plan Name groupbox and set a name for the current plan. Defining the entry/target point is similar to the previous Anatomical Fiducials widget, except that you can also enter exact coordinate values into the ACPC/Stereotactic space coordinate boxes and press \"Update Crosshairs\" to move the crosshairs to the specified coordinates.
Warning
For now, electrode localization in trajectoryGuide is achieved by manually defining the bottom and top of each electrode. The user should place the postoperative imaging volume (CT and/or MRI) inside the patient folder, re-load trajectoryGuide, and run the registration step. Once the postoperative volume is aligned with the reference image volume it can be used to locate the electrode(s).
\u2003\u2003The user must indicate the plan that is associated with the electrode being localized. If only postoperative localization is being performed for the patient, a plan name can be defined within this module. The user is then asked to place a fiducial marker at the very tip of the electrode to mark the \u201ctarget\u201d and another fiducial point near where it exits the skull to indicate the \u201centry\u201d point. Once the two points are placed, the \u201cConfirm Electrode\u201d button can be pressed and the postoperative electrode will be rendered in the 2D and 3D views.
The postop localization module.
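From those two fiducials, the electrode axis and individual contact centres can be derived with simple vector geometry. An illustrative sketch follows; the contact count, offset, and spacing are placeholder values for a hypothetical four-contact lead, not a specific manufacturer's geometry.
import numpy as np

def contact_positions(tip, entry, n_contacts=4, first_offset=1.5, spacing=2.0):
    # Place contact centres along the tip -> entry direction (all distances in millimetres)
    tip, entry = np.asarray(tip, float), np.asarray(entry, float)
    direction = (entry - tip) / np.linalg.norm(entry - tip)
    return [tip + direction * (first_offset + i * spacing) for i in range(n_contacts)]

for i, c in enumerate(contact_positions(tip=[12.0, -8.0, -4.0], entry=[35.0, 30.0, 60.0])):
    print('contact', i, c)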
"},{"location":"widgets/10_postoperative_programming.html","title":"Volume of activated tissue","text":"The volume of activated tissue (VAT) also referred to as the volume of tissue activated (VTA) is a model the predicts the extent and location of neural activation produced by stimulation. In trajectoryGuide, the model proposed by Dembek et al. (2017) has been utilized since it does not require image processing to be conducted prior to VAT computation 1. The calculation of the stimulation field radius based on the DBS stimulation parameters is described the following equation:
r=\\left ( \\frac{pw}{90 \\mu s} \\right )*\\sqrt{0.72\\frac{Vm}{A}*\\frac{I}{165 V/m}}\u2003\u2003where pw is the pulse width (microseconds), A is the amplitude of stimulation (voltage or milliamperes), and I is the impedance (Ohms) of the electrode contact being used for stimulation. The value 0.72 Vm is a constant validated by Dembek et al. (2017) in a previous study 1. A more complex stimulation field model will eventually be incorporated into trajectoryGuide that can handle bipolar stimulation. For now, only a monopolar stimulation model can be generated. To generate the VAT models in trajectoryGuide, the user needs to input the stimulation parameters including the active contact, stimulation amplitude, frequency, pulse width, and impedance
The postop volume of activated tissue module.
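Translating the printed expression directly into code gives a rough estimate of the stimulation-field radius. Note that this sketch is an interpretation on my part: for the square root to yield a length, I is read here as the stimulation current in amperes (amplitude divided by impedance for voltage-controlled stimulation); consult Dembek et al. (2017) for the exact parameterization used by trajectoryGuide.
import math

def vat_radius_mm(pulse_width_us, amplitude_v, impedance_ohm):
    # Rough stimulation-field radius following the formula above (interpretation hedged)
    current_a = amplitude_v / impedance_ohm             # assumes voltage-controlled stimulation
    radius_m = (pulse_width_us / 90.0) * math.sqrt(0.72 * current_a / 165.0)
    return radius_m * 1000.0                            # metres -> millimetres

print(vat_radius_mm(pulse_width_us=60, amplitude_v=2.5, impedance_ohm=1000))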
1 T. A. Dembek et al., \u201cProbabilistic mapping of deep brain stimulation effects in essential tremor,\u201d NeuroImage Clin., vol. 13, pp. 164\u2013173, 2017, doi: 10.1016/j.nicl.2016.11.019.
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["trimmer","stopWordFilter"]},"docs":[{"location":"index.html","title":"Home","text":"An open-source software for neurosurgical trajectory planning, visualization, and postoperative assessment
What is trajectoryGuide?
trajectoryGuide provides the capability to plan surgical trajectories within 3D Slicer, an open-source medical imaging software. trajectoryGuide contains modules that span the three phases of neurosurgical trajectory planning:
"},{"location":"index.html#preoperative-features","title":"Preoperative features","text":"The main goal of image guidance in neurosurgery is to accurately project magnetic resonance imaging (MRI) and/or computed tomography (CT) data into the operative field for defining anatomical landmarks, pathological structures and margins oftumors. To achieve this, \"neuronavigation\" software solutions have been developed to provide precise spatial information to neurosurgeons. Safe and accurate navigation of brain anatomy is of great importance when attempting to avoid important structures such as arteries and nerves.
Neuronavigation software provides orientation information to the surgeon during all three phases of surgery: 1) pre-operative trajectory planning, 2) the intraoperative steroetactic procedure, and 3) post-operative visualization. Trajectory planning is performed prior to surgery using preoperative MRI data. On the day of surgery, the plans are transferred to sterotactic space using a frame or frame-less system. In both instances, a set of radiopaque fiducials are detected, providing the transformation matrix from anatomical space to sterotactic space. During the surgical procedure, the plans are updated according to intraoperative data collected (i.e. microelectrode recordings, electrode stimulation etc.). After the surgery, post-operative MRI or CT imaging confirms the actual position of the trajectory(ies).
"},{"location":"about.html#trajectoryguide","title":"trajectoryGuide?","text":"trajectoryGuide is a surgical planning, visuazliation, and postoperative assessment tool used for various trajectory surguries. It provides capabilities across the entire surgical spectrum:
"},{"location":"about.html#what-is-3d-slicer","title":"What is 3D Slicer?","text":"3D Slicer is an open-source software platform for medical image informatics, image processing, and 3D visualization. Built over two decades through support from the National Institutes of Health and a worldwide developer community, Slicer brings free, powerful cross-platform (Linux, MacOSX, and Windows) processing tools to physicians, researchers, and the general public.
"},{"location":"about.html#3d-slicer-features","title":"3D Slicer Features","text":"Multi-organ: from head to toe Support for multi-modality imaging including: MRI, CT, US, nuclear medicine, and microscopy Bidirectional interface for devices
"},{"location":"about.html#sources","title":"Sources","text":"3D Slicer as an Image Computing Platform for the Quantitative Imaging Network <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3466397/>
_. Magnetic Resonance Imaging. 2012 Nov;30(9):1323-41. PMID: 22770690.Install 3D Slicer Version 4.11.0 (or later) by downloading it from the 3D Slicer website.
"},{"location":"installation.html#trajectoryguide-source-code","title":"trajectoryGuide source code","text":"Download the trajectoryGuide source code from GitHub. Unzip the folder and store it somewhere on your system.
"},{"location":"installation.html#template-space-directory","title":"Template space directory","text":"Download the template space zip file from the most recent GitHub release. Unzip the folder and move it into the trajectoryGuide folder at the location resources/ext_libs/space
. h
You will first need to install a few Python libraries before loading trajectoryGuide. Click the Blue and Yellow Python button located in the top menu to the right.
Python interactor button.
The Python interactor should now be visible at the bottom of the 3D Slicer window.
3D Slicer Python interactor.
Copy and paste the command below into the Python interactor box, press Enter
to run the command.
pip_install('--upgrade pip')\n
Copy and paste the command below into the Python interactor box, press Enter
to run the command.
pip_install('scikit-image scikit-learn pandas scipy==1.5.4')\n
trajectoryGuide uses the Volume Reslice Driver module from SlicerIGT. To install this use the Extension Manager module within 3D Slicer or download the source code for your slicer version here (select Slicer version --> extensions --> SlicerIGT).
"},{"location":"installation.html#add-trajectoryguide-modules","title":"Add trajectoryGuide modules","text":"Note
You will only need to add the following modules: \u2003\u2003\u2611 dataImport \u2003\u2003\u2611 frameDetect \u2003\u2003\u2611 registration \u2003\u2003\u2611 anatomicalLandmarks \u2003\u2003\u2611 preopPlanning \u2003\u2003\u2611 intraopPlanning \u2003\u2003\u2611 postopProgramming \u2003\u2003\u2611 postopLocalization \u2003\u2003\u2611 dataView
In the top menu, click on the Edit
menu and select Application settings
3D Slicer Edit menu.
In the settings dialog window select Modules
, click the right-facing arrows next to the box with the text Additional module paths
and click Add
Navigate to where you stored the source code for trajectoryGuide, select each of the sub-folders listed in the Note box above and click Choose
. You will need to add each folder one-by-one.
3D Slicer add module path.
3D Slicer will want to restart at this point, click Yes
3D Slicer restart notification.
Now when 3D Slicer restarts, trajectoryGuide will be included in Slicer's modules menu.
"},{"location":"manuals.html","title":"Product Manuals","text":""},{"location":"manuals.html#electrophysiology","title":"Electrophysiology","text":""},{"location":"manuals.html#alpha-omega","title":"Alpha Omega","text":"
Warning
Ensure you are using 3D Slicer 4.11
"},{"location":"3dslicer/01_interface.html#interface-overview","title":"Interface Overview","text":"\u2003\u2003Slicer stores all loaded data in a data repository, called the \u201cscene\u201d (or Slicer scene or MRML scene). Each data set, such as an image volume, surface model, or point set, is represented in the scene as a \u201cnode\u201d.
\u2003\u2003Slicer provides a large number \u201cmodules\u201d, each implementing a specific set of functions for creating or manipulating data in the scene. Modules typically do not interact with each other directly: they just all operate on the data nodes in the scene. Slicer package contains over 100 built-in modules and additional modules can be installed by using the Extension Manager.
"},{"location":"3dslicer/01_interface.html#2d-views","title":"2D Views","text":"\u2003\u2003Three default slice views are provided (with Red, Yellow and Green colored bars) in which Axial, Saggital, Coronal or Oblique 2D slices of volume images can be displayed. Additional generic slice views have a grey colored bar and an identifying number in their upper left corner.
\u2003\u2003Slice View Controls: The colored bar across any Slice View shows a pushpin icon on its left. When the mouse rolls over this icon, a panel for configuring the slice view is displayed. The panel is hidden when the mouse moves away. For persistent display of this panel, just click the pushpin icon. For more options, click the double-arrow icon.
View Controllers module provides an alternate way of displaying these controllers in the Module Panel.
Displays a rendered 3D view of the scene along with visual references to specify orientation and scale. Default orientation axes:
3D View Controls: The blue bar across any 3D View shows a pushpin icon on its left. When the mouse rolls over this icon, a panel for configuring the 3D View is displayed. The panel is hidden when the mouse moves away. For persistent display of this panel, just click the pushpin icon.
"},{"location":"3dslicer/01_interface.html#mouse-keyboard-shortcuts","title":"Mouse & Keyboard Shortcuts","text":"The following summary of shortcuts is taken from the 3D Slicer documentation.
Note
The shortcuts are working on any stable 3D Slicer version >=4.10.0
"},{"location":"3dslicer/01_interface.html#generic-shortcuts","title":"Generic shortcuts","text":" Shortcut Operation Ctrl
+ f
find module by name (hit Enter
to select) Ctrl
+ a
add data from file Ctrl
+ o
add data from file Ctrl
+ s
save data to files Ctrl
+ w
close scene Ctrl
+ 0
show Error Log Ctrl
+ 1
show Application Help Ctrl
+ 2
show Application Settings Ctrl
+ 3
show/hide Python Interactor Ctrl
+ 4
show Extension Manager Ctrl
+ 5
show/hide Module Panel Ctrl
+ h
open default startup module (configurable in Application Settings)
\u2003\u2003The following shortcuts are available when a slice view is active. To activate a view, click inside the view: if you do not want to change anything in the view, just activate it then do right-click
without moving the mouse. Note that simply hovering over the mouse over a slice view will not activate the view.
Shortcut Operation right-click
+ drag up/down
zoom image in/out left-click
+ drag up/down
adjust level of image left-click
+ drag left/right
adjust window of image Ctrl
+ mouse wheel
zoom image in/out middle-click
+ drag
pan (translate) view Shift
+ left-click
+ drag
pan (translate) view left arrow
/ right arrow
move to previous/next slice b
/ f
move to previous/next slice Shift
+ mouse move
move crosshair in all views v
toggle slice visibility in 3D view r
reset zoom and pan to default g
toggle segmentation or labelmap volume t
toggle foreground volume visibility [
/ ]
use previous/next volume as background {
/ }
use previous/next volume as foreground
\u2003\u2003The following shortcuts are available when a 3D view is active. To activate a view, click inside the view: if you do not want to change anything in the view, just activate it then do right-click
without moving the mouse. Note that simply hovering over the mouse over a slice view will not activate the view.
Shortcut Operation Shift
+ mouse move
move crosshair in all views left-click
+ drag
rotate view left arrow
/ right arrow
rotate view up arrow
/ down arrow
rotate view End
or Keypad 1
rotate to view from anterior Shift
+ End
or Shift
+ Keypad 1
rotate to view from posterior Page Down
or Keypad 3
rotate to view from left side Shift
+ Page Down
or Shift
+ Keypad 3
rotate to view from right side Home
or Keypad 7
rotate to view from superior Shift
+ Home
or Shift
+ Keypad 7
rotate to view from inferior right-click
+ drag up/down
zoom view in/out Ctrl
+ mouse wheel
zoom view in/out +
/ -
zoom view in/out middle-click
+ drag
pan (translate) view Shift
+ left-click
+ drag
pan (translate) view Shift
+ left arrow
/ Shift
+ right arrow
pan (translate) view Shift
+ up arrow
/ Shift
+ down arrow
pan (translate) view Shift
+ Keypad 2
/ Shift
+ Keypad 4
pan (translate) view Shift
+ Keypad 6
/ Shift
+ Keypad 8
pan (translate) view Keypad 0
or Insert
reset zoom and pan, rotate to nearest standard view
Warning
Ensure you are using 3D Slicer 4.11
"},{"location":"3dslicer/02_saving_data.html#walkthrough","title":"Walkthrough","text":"Click on the File menu at the top.
3D Slicer file menu.
Choose Save, the dialog box shown below will appear:
3D Slicer file menu.
If this is your first time saving you will have to define the directory to save the files. Click on Change directory for selected files. The dialog box below will appear:
3D Slicer file menu.
Find the directory where you want to save the data, create a new folder called [VolumeID]_scene and then double click on it so you are now within the directory. Select Choose.
You will now notice that the .nii file is de-selected and is in the original directory location. All the other files you will be saving are in the newly created [VolumeID]_scene folder. Click Save.
3D Slicer file menu.
If this not your first time saving you will see two warning messages. The first will notify you that the .mrml file already exists and ask if you want to replace it. Click OK.
3D Slicer file menu.
A second warning message will appear letting you know that the .fcsv file already exists and ask if you would like to replace it. Click Yes to All. This will overwrite your old datafiles with the newer ones.
3D Slicer file menu.
To close the current scene, before opening a new subject, click the File menu and select Close Scene.
3D Slicer file menu.
"},{"location":"widgets/00_overview.html","title":"Overview","text":""},{"location":"widgets/00_overview.html#neuronavigation","title":"Neuronavigation","text":"
\u2003\u2003The main goal of image guidance in neurosurgery is to accurately project magnetic resonance imaging (MRI) and/or computed tomography (CT) data into the operative field for defining anatomical landmarks, pathological structures and margins of tumors. To achieve this, \"neuronavigation\" software solutions have been developed to provide precise spatial information to neurosurgeons. Safe and accurate navigation of brain anatomy is of great importance when attempting to avoid important structures such as arteries and nerves.
\u2003\u2003Neuronavigation software provides orientation information to the surgeon during all three phases of surgery: 1) pre-operative trajectory planning, 2) the intraoperative stereotactic procedure, and 3) post-operative visualization. Trajectory planning is performed prior to surgery using preoperative MRI data. On the day of surgery, the plans are transferred to stereotactic space using a frame or frame-less system. In both instances, a set of radiopaque fiducials are detected, providing the transformation matrix from anatomical space to stereotactic space. During the surgical procedure, the plans are updated according to intraoperative data collected (i.e. microelectrode recordings, electrode stimulation etc.). After the surgery, post-operative MRI or CT imaging confirms the actual position of the trajectory(ies).
"},{"location":"widgets/00_overview.html#trajectoryguide","title":"trajectoryGuide","text":"trajectoryGuide is an open-source software suite that provides the capability to plan neurosurgical trajectories within 3D Slicer. trajectoryGuide contains several modules that span the three phases of surgical intervention: pre-op, intra-op, and post-op.
"},{"location":"widgets/00_overview.html#prior-art","title":"Prior Art","text":""},{"location":"widgets/00_overview.html#open-source","title":"Open source","text":""},{"location":"widgets/00_overview.html#pydbs","title":"PyDBS","text":"Developed by Pierre Jannin. The software is not freely available, a description of the tool can be found in this publication.
"},{"location":"widgets/00_overview.html#tactics","title":"Tactics","text":"Developed by David G. Gobbi and Yves P.Starreveld. The software is available on GitHub.
"},{"location":"widgets/00_overview.html#ogles2","title":"Ogles2","text":"Developed by Johannes A. Koeppen and based on Ogle written by Dr. Michael J Gourlay. The software can be downloaded from Sourceforge or the Neuranse website.
"},{"location":"widgets/00_overview.html#commercial-software","title":"Commercial software","text":""},{"location":"widgets/00_overview.html#stealthstation-s8","title":"StealthStation S8","text":"Developed by Medtronic. More information can be found on their website.
"},{"location":"widgets/00_overview.html#neuroinspire","title":"Neuroinspire","text":"Developed by Renishaw. More information can be found on their website.
"},{"location":"widgets/00_overview.html#elements","title":"Elements","text":"Developed by Brainlab. More information can be found on their website.
"},{"location":"widgets/00_overview.html#inomed-planning-system","title":"Inomed Planning System","text":"Developed by Inomed. More information can be found on their website.
"},{"location":"widgets/00_overview.html#waypoint-navigator","title":"WayPoint Navigator","text":"Distributed by FHC. More information can be found on their website.
"},{"location":"widgets/00_overview.html#mnps","title":"MNPS","text":"Developed by Mevis. More information can be found on their website.
"},{"location":"widgets/00_overview.html#neurosight","title":"NeuroSight","text":"Developed by Integra. More information can be found on their website.
"},{"location":"widgets/00_overview.html#claronav-navient","title":"ClaroNav Navient","text":"Developed by ClaroNav. More information can be found on their website.
"},{"location":"widgets/01_data_import.html","title":"Data Import","text":"
Warning
If you have not installed trajectoryGuide into 3D Slicer please follow the installation instructions.
\u2003\u2003The first module in trajectoryGuide handles the import of patient imaging data. The data should be within a single directory, this directory will be selected within the import window (do not select the files within the directory). During the initial data import for a patient, trajectoryGuide will store a copy of the imaging data into a source
directory as a backup, these files will remain unchanged.
trajectoryGuide stores all imaging data in NIFTI (Neuroinformatics Technology Initiative) format, with the file extension .nii.gz
. If the original imaging data is in DICOM format, the files will first need to be converted to NIFTI according to BIDS (see section above).
trajectoryGuide requires the input data folder to be organized according to Brain Imaging Data Structure (BIDS). The following is an example input directory.
bids/\n \u251c\u2500\u2500 dataset_description.json\n \u2514\u2500\u2500 sub-<subject_label>/\n \u2514\u2500\u2500 ses-<ses_label>/\n \u251c\u2500\u2500 anat/\n \u2502 \u251c\u2500\u2500 sub-<subject_label>_ses-<ses_label>_T1w.nii.gz\n \u2502 \u251c\u2500\u2500 sub-<subject_label>_ses-<ses_label>_PD.nii.gz\n \u2502 \u2514\u2500\u2500 sub-<subject_label>_ses-<ses_label>_acq-Tra_T2w.nii.gz\n \u2514\u2500\u2500 ct/\n \u2514\u2500\u2500 sub-<subject_label>_ses-<ses_label>_acq-Frame_ct.nii.gz\n
"},{"location":"widgets/01_data_import.html#output-directory-structure","title":"Output directory structure","text":"trajectoryGuide trajectoryGuide stores processed data within the derivatives directoy of the BIDS dataset.
bids/\n \u251c\u2500\u2500 dataset_description.json\n \u2514\u2500\u2500 sub-<subject_label>/...\nderivatives/\n \u2514\u2500\u2500 trajectoryGuide/\n \u2514\u2500\u2500 sub-<subject_label>/\n \u251c\u2500\u2500 sub-<subject_label>_surgical_data.json\n \u251c\u2500\u2500 sub-<subject_label>_T1w.nii.gz\n \u251c\u2500\u2500 sub-<subject_label>_T1w.json\n \u251c\u2500\u2500 sub-<subject_label>_space-T1w_acq-Tra_T2w.nii.gz\n \u251c\u2500\u2500 sub-<subject_label>_space-T1w_acq-Tra_T2w.json\n \u251c\u2500\u2500 sub-<subject_label>_desc-rigid_from-TraT2w_to-T1w_xfm.h5\n \u251c\u2500\u2500 sub-<subject_label>_space-T1w_PD.nii.gz\n \u251c\u2500\u2500 sub-<subject_label>_space-T1w_PD.json\n \u251c\u2500\u2500 sub-<subject_label>_desc-rigid_from-PD_to-T1w_xfm.h5\n \u251c\u2500\u2500 sub-<subject_label>_ses-pre_coordsystem.json\n \u251c\u2500\u2500 settings/...\n \u251c\u2500\u2500 source/...\n \u251c\u2500\u2500 space/...\n \u2514\u2500\u2500 summaries/...\n
\u2003\u2003Each image volume has an associated .json
file that stores metadata associated with the volume as it progresses through the trajectoryGuide workflow. Select the import options prior to loading the patient directory.
Data import module in trajectoryGuide.
"},{"location":"widgets/04_frame_detection.html","title":"Frame Detection","text":"
Note
To navigate through the 2D view: Move crosshairs in all views: hold Shift
while moving the mouse Zoom in/out: hold the right
mouse button while moving mouse up/down (can hold Control/Command
and scroll) Pan (translate) scan: hold middle-mouse button
while moving the mouse
\u2003\u2003Automatic frame detection will work for CT and MRI. From the drop-down menu, next to Fiducial Volume
, select the volume containing the stereotactic frame. Choose the stereotactic frame that is captured in the CT volume and press Detect Frame Fiducials
.
Frame detection widget interface.
If the automatic detection was successful you will see an image like this:
Frame fiducials with frame registration errors.
\u2003\u2003Scroll up/down the slices to check the accuracy of the frame detection. The displayed numbers are the fiducial registration errors, lower values indicate a more accurate registration. Values lower than 0.5mm appear in Green, while values above 0.5mm appear in Red. On the left-hand side you will see the overall frame registration error (anything below 0.5 mm should be acceptable).
\u2003\u2003If you are satisfied with the results, select Confirm Frame Fiducials. If you are not satisfied, you can try adjusting frame registration settings and re-run autodection (see ensuing section).
"},{"location":"widgets/04_frame_detection.html#adjust-frame-registration-settings","title":"Adjust frame registration settings","text":"You can modify the default frame registration settings by clicking the Advanced Settings box in the frame detection widget.
Frame detection advanced settings.
\u2003\u2003The first parameter to adjust would be Match Centroids, select Yes. A pop-up message will appear asking if you want to overwrite the previous frame registration data, select Yes:
Frame detection pop-up message.
\u2003\u2003If you are still not happy with the registration, try increasing the number of iterations to 200 and re-run. The other parameter you can adjust is the number of iterations, increasing the value to 300. You may also want to try decreasing the radius of the target by 0.1 mm. The last choice would be to adjust the transform type, however this will introduce some non-linearity into the registration.
"},{"location":"widgets/04_frame_detection.html#manual-frame-detection","title":"Manual frame detection","text":"\u2003\u2003To run manual frame detection select the button Manual Detection. You will need to identify each frame fiducial one-by-one on the same axial slice. If you are unsure of how the stereotactic frame fiducials are numbered you can press Frame Fiducial Legend
to see the mapping. All point fiducials will need to be placed on the same axial slice. When you are finished, press Confirm Frame Fiducials.
Manual frame detection fiducials.
"},{"location":"widgets/04_frame_detection.html#supported-frame-systems","title":"Supported Frame Systems","text":""},{"location":"widgets/04_frame_detection.html#leksell-frame-localizer","title":"Leksell frame localizer","text":"Leksell stereotactic system.
"},{"location":"widgets/04_frame_detection.html#brw-frame-localizer","title":"BRW frame localizer","text":"BRW stereotactic system.
"},{"location":"widgets/04_frame_detection.html#crw-frame","title":"CRW frame","text":"CRW stereotactic system.
"},{"location":"widgets/04_frame_detection.html#automatic-frame-detection-algorithm","title":"Automatic frame detection algorithm","text":"\u2003\u2003The automatic frame detection algorithm first employs an intensity threshold to identify pixel clusters that may belong to an N-localizer. After the entire image volume is scanned, the identified clusters are either accepted or rejected based on the stereotactic frame geometry. The intensity threshold is a binary threshold that results in pixel values less than a specified intensity value to be removed (i.e. brain tissue will be removed). For the Leksell and CRW frame systems, a morphological erosion step is applied to the threshold image followed by a morphological dilation. Erosion of a binary image sharpens the boundaries of foreground pixels, which will act to make features in the image volume \u201cthinner\u201d. The resulting image will contain frame and skull artifact but the frame fiducial markers will no longer be present. To finalize the image mask, a morphological dilation step is applied to the eroded image to enlarge the remaining structures in the image. Since the frame fiducial markers were removed with the erosion step, the objects being enlarged are left-over artifact to be removed - which forms the final image mask. The image mask is inverted and intersected with the threshold image to recover the fiducial markers.
\u2003\u2003The resulting masked image is processed further to obtain connected components (i.e. neighbouring pixels that share the same value). As long as neighbouring pixels share the same value, they are grouped into a single region, and each connected region is assigned its own integer label to form clusters of connected pixels. Since the dimensions of the N-localizer are known, the expected pixel cluster size can be estimated.
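\u2003\u2003A condensed sketch of this threshold / erode / dilate / connected-components pipeline is shown below using NumPy and SciPy on a 3D volume array. The threshold value, iteration counts, and expected cluster sizes are placeholder assumptions for illustration, not the values trajectoryGuide uses internally.

```python
import numpy as np
from scipy import ndimage

def candidate_fiducial_clusters(volume, intensity_threshold=600,
                                erode_iters=3, dilate_iters=6,
                                min_size=20, max_size=400):
    """Return labelled clusters that could belong to an N-localizer.
    All numeric parameters here are illustrative placeholders."""
    # 1. Binary threshold: keep only bright voxels (drops brain tissue).
    bright = volume > intensity_threshold

    # 2. Erode to delete thin structures (the fiducial rods disappear),
    #    then dilate so the surviving artifact re-covers its original extent.
    artifact = ndimage.binary_erosion(bright, iterations=erode_iters)
    artifact = ndimage.binary_dilation(artifact, iterations=dilate_iters)

    # 3. Invert the artifact mask and intersect it with the thresholded
    #    image to recover the fiducial markers.
    fiducials = bright & ~artifact

    # 4. Connected components: group neighbouring foreground voxels and
    #    keep only clusters whose size matches the expected localizer size.
    labels, n = ndimage.label(fiducials)
    sizes = ndimage.sum(fiducials, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if min_size <= s <= max_size]
    return labels, keep
```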
"},{"location":"widgets/04_frame_detection.html#leksell-localization-sample","title":"Leksell localization sample","text":""},{"location":"widgets/04_frame_detection.html#brw-localization-sample","title":"BRW localization sample","text":""},{"location":"widgets/04_frame_detection.html#crw-localization-sample","title":"CRW localization sample","text":""},{"location":"widgets/05_registration.html","title":"Registration","text":""},{"location":"widgets/05_registration.html#patient-space-registration","title":"Patient Space Registration","text":"\u2003\u2003All registrations with patient scans will be rigid registrations. With some of the more advanced algorithms you can override this and run non-linear registration but it is strongly discouraged. See the below Algorithms to learn more about each algorithm and the respective settings.
Patient space registration settings.
"},{"location":"widgets/05_registration.html#registration-settings","title":"Registration Settings","text":"\u2003\u2003Within the Reference Volume drop-down box, select the scan you want to co-register all other scans to (Reference). In the Frame Volume drop-down box, select the scan that contains the stereotactic fiducials. In the Floating Volumes drop-down box, all other scans (floating) will be checked to indicate they will be registered to the reference. If there are any floating scans you do not want registered, uncheck them.
Patient space drop-down volume boxes.
\u2003\u2003To begin the registration, press the Run Registration button. The Registration Process box will display updated information during the registration. When the registration is complete, the view will be automatically changed to a compare view.
Patient space drop-down volume boxes.
"},{"location":"widgets/05_registration.html#check-registration-results","title":"Check Registration Results","text":"\u2003\u2003For each registration you will either select Confirm Registration or Decline Registration. If you choose to decline a registration, the registration can be re-run with a different algorithm. To check the registration, it is helpful to use the opacity slider to change the opacity of the foreground scan (floating scan).
Patient space drop-down volume boxes.
\u2003\u2003You can also use the Layer Reveal tool to check the registration in more detail. This tool displays a square that contains half the foreground scan and half the background scan.
Registration layer reveal tool.
\u2003\u2003When you have finished checking the registrations, any confirmed scans will disappear from the Floating Volumes drop-down box; declined scans will still appear there. To re-run a registration, update any settings and press the Run Registration button; all previous registration information for the current floating scans will be erased.
"},{"location":"widgets/05_registration.html#algorithms","title":"Algorithms","text":"\u2003\u2003The default algorithm will be NiftyReg using nearest neighbor interpolation when applying the transform. You are able to change the registration algorithm and parameters according to the following information.
"},{"location":"widgets/05_registration.html#niftyreg-reg_aladin","title":"NiftyReg - reg_aladin","text":"For information about this algorithm you can visit this page.
\u2003\u2003For information about this algorithm you can visit this page. This algorithm gives the user more control over each step. The user can specify the \"stages\" of registration, where a stage consists of a transform and an image metric. Each stage is further divided into levels, each with specific values for the number of iterations, shrink factors, and smoothing sigmas.
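\u2003\u2003The stage/level idea maps naturally onto a multi-resolution rigid registration. The SimpleITK sketch below is an illustrative stand-in (not trajectoryGuide's own registration call): a single rigid stage using a Mattes mutual-information metric, run over three levels whose shrink factors and smoothing sigmas become finer as the registration proceeds. The file names in the usage comment are assumed examples.

```python
import SimpleITK as sitk

def rigid_stage(fixed, moving, iterations=200):
    """One rigid 'stage' (transform + image metric), run over three
    multi-resolution 'levels' (shrink factors / smoothing sigmas)."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=iterations)
    reg.SetOptimizerScalesFromPhysicalShift()

    # Three levels within this stage: coarse-to-fine resolution schedule.
    reg.SetShrinkFactorsPerLevel(shrinkFactors=[4, 2, 1])
    reg.SetSmoothingSigmasPerLevel(smoothingSigmas=[2.0, 1.0, 0.0])
    reg.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()

    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(initial, inPlace=False)
    return reg.Execute(fixed, moving)

# Usage (assumed file names): resample the floating scan into reference space.
# fixed = sitk.ReadImage("sub-P001_T1w.nii.gz", sitk.sitkFloat32)
# moving = sitk.ReadImage("sub-P001_acq-Tra_T2w.nii.gz", sitk.sitkFloat32)
# transform = rigid_stage(fixed, moving)
# resampled = sitk.Resample(moving, fixed, transform, sitk.sitkNearestNeighbor, 0.0)
```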
\u2003\u2003Click Run Registration
. The registration progression will be updated within the Registration Progress
window. Once registration is completed, you will see the co-registered volumes appear in the floating drop-down box (under Co-registered Volumes
). You can now confirm the registration results by clicking the Compare Volumes
button. For each registration you will either select Confirm Registration
or Decline Registration
. If you choose to decline a registration, you will be able to re-run the registration with a different algorithm.
"},{"location":"widgets/06_anatomical_fiducials.html","title":"Registration","text":"
Note
To navigate through the 2D view: Move crosshairs in all views: hold Shift while moving the mouse. Zoom in/out: hold the right mouse button while moving the mouse up/down (or hold Control/Command and scroll). Pan (translate) the scan: hold the middle mouse button while moving the mouse.
\u2003\u2003The midline plane will need to be determined, which relies on four points: the anterior commissure (AC), the posterior commissure (PC), and two midline points (Mid 1-2). The midline points should include at least one interhemispheric point and one brainstem point (see the sections below for landmark positions). These four points are then used to define the midline plane, which in turn defines the Talairach coordinate system.
\u2003\u2003To place a fiducial point, click the place point button and drop the point at the indicated landmark. Once you have placed AC, PC, and at least two midline points, click Confirm Fiducials. A new entry will be added to the fiducial table for the point MCP (the mid-commissural point).
Warning
If you modify any fiducial points, you will need to press Confirm Fiducials again to re-calculate MCP.
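\u2003\u2003To make the geometry concrete, the NumPy sketch below computes the MCP and a set of ACPC axes from AC, PC, and one midline point. It is a generic construction under assumed RAS coordinates (anterior axis along the PC-to-AC line, lateral axis perpendicular to the midline plane), not necessarily the exact convention trajectoryGuide uses internally; the coordinates in the example are made up.

```python
import numpy as np

def acpc_frame(ac, pc, midline):
    """Return the mid-commissural point (MCP) and orthonormal ACPC axes.
    ac, pc, midline: 3-vectors in RAS millimetres (assumed convention)."""
    ac, pc, midline = (np.asarray(p, float) for p in (ac, pc, midline))
    mcp = (ac + pc) / 2.0                       # mid-commissural point

    y = ac - pc                                 # anterior axis: PC to AC
    y /= np.linalg.norm(y)

    # Lateral axis: normal of the plane spanned by the AC-PC line and the
    # midline point, i.e. perpendicular to the mid-sagittal plane.
    x = np.cross(y, midline - pc)
    x /= np.linalg.norm(x)

    z = np.cross(x, y)                          # superior axis completes the frame
    return mcp, np.vstack([x, y, z])

# Example with made-up coordinates (mm):
mcp, axes = acpc_frame(ac=[0.5, 12.0, 2.0], pc=[0.3, -14.5, 0.0],
                       midline=[0.4, -2.0, 30.0])
print("MCP:", mcp)
```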
"},{"location":"widgets/06_anatomical_fiducials.html#ac-point","title":"AC point","text":"The anterior commissure.
"},{"location":"widgets/06_anatomical_fiducials.html#pc-point","title":"PC point","text":"The posterior commissure.
"},{"location":"widgets/06_anatomical_fiducials.html#midline-points","title":"Midline Points","text":""},{"location":"widgets/06_anatomical_fiducials.html#genu","title":"Genu","text":"Genu.
"},{"location":"widgets/06_anatomical_fiducials.html#infracollicular-sulcus","title":"Infracollicular Sulcus","text":"The infracollicular sulcus.
"},{"location":"widgets/06_anatomical_fiducials.html#superior-interpeduncular-fossa","title":"Superior interpeduncular fossa","text":"The superior interpeduncular fossa.
"},{"location":"widgets/07_preoperative_planning.html","title":"Preoperative Planning","text":"
Note
To navigate through the 2D view: Move crosshairs in all views: hold Shift while moving the mouse. Zoom in/out: hold the right mouse button while moving the mouse up/down (or hold Control/Command and scroll). Pan (translate) the scan: hold the middle mouse button while moving the mouse.
\u2003\u2003The planning module contains two coordinate groupboxes: one for ACPC space and the other for stereotactic space. The coordinates are linked to the position of the crosshairs; when the crosshairs move, the coordinates are updated in real time.
The preop planning module.
trajectoryGuide links objects and data in the scene to the plan name. When you switch between plans, the values in the coordinate boxes will update to the current plan values. Before setting a target/entry, you will need to add a new plan. Click Add
in the Plan Name groupbox and set a name for the current plan. Defining the entry/target point is similar to the Anatomical Fiducials widget, except that you can also enter exact coordinate values into the ACPC/stereotactic space coordinate boxes and press \"Update Crosshairs\" to move the crosshairs to the specified coordinates.
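\u2003\u2003As an illustration of how an ACPC-relative coordinate relates to world space, the short sketch below converts a lateral/AP/vertical offset from the MCP into a world-space point, reusing the acpc_frame idea from the earlier sketch. The offsets and axes shown are arbitrary example values, not recommended targets, and converting further into frame (stereotactic) space would additionally require the frame registration transform.

```python
import numpy as np

def acpc_offset_to_world(mcp, axes, lateral, ap, vertical):
    """Convert an ACPC-relative offset (mm) into world coordinates.
    axes: rows are the lateral, anterior, and superior unit vectors."""
    offset = np.array([lateral, ap, vertical], float)
    return np.asarray(mcp, float) + axes.T @ offset

# Example values only: pretend the world axes are already ACPC-aligned.
mcp = np.array([0.4, -1.25, 1.0])
axes = np.eye(3)
target_world = acpc_offset_to_world(mcp, axes, lateral=11.0, ap=2.0, vertical=-4.0)
print(target_world)
```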
Warning
For now, electrode localization in trajectoryGuide is achieved by manually defining the bottom and top of each electrode. The user should place the postoperative imaging volume (CT and/or MRI) inside the patient folder, re-load trajectoryGuide, and run the registration step. Once the postoperative volume is aligned with the reference image volume it can be used to locate the electrode(s).
\u2003\u2003The user must indicate the plan that is associated with the electrode being localized. If only postoperative localization is being performed for the patient, a plan name can be defined within this module. The user is then asked to place a fiducial marker at the very tip of the electrode to mark the \u201ctarget\u201d and another fiducial point near where it exits the skull to indicate the \u201centry\u201d point. Once the two points are placed, the \u201cConfirm Electrode\u201d button can be pressed and the postoperative electrode will be rendered in the 2D and 3D views.
The postop localization module.
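\u2003\u2003From the two placed points, the electrode axis and approximate contact positions can be derived geometrically. The sketch below shows this computation for a hypothetical 4-contact lead with made-up tip-offset and contact-spacing values; the actual geometry depends on the electrode model and is not taken from trajectoryGuide.

```python
import numpy as np

def contact_centers(target, entry, tip_to_first_contact=1.5,
                    contact_spacing=2.0, n_contacts=4):
    """Estimate contact centre positions (mm) along the electrode axis.
    target: electrode tip; entry: point where the lead exits the skull.
    The offsets are placeholder values for a hypothetical 4-contact lead."""
    target, entry = np.asarray(target, float), np.asarray(entry, float)
    direction = entry - target
    direction /= np.linalg.norm(direction)      # unit vector, tip -> entry
    distances = tip_to_first_contact + contact_spacing * np.arange(n_contacts)
    return target + np.outer(distances, direction)

# Example with made-up coordinates (mm) in the reference image space.
centers = contact_centers(target=[-12.0, -3.0, -4.0], entry=[-35.0, 40.0, 60.0])
print(centers)
```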
"},{"location":"widgets/10_postoperative_programming.html","title":"Volume of activated tissue","text":"The volume of activated tissue (VAT) also referred to as the volume of tissue activated (VTA) is a model the predicts the extent and location of neural activation produced by stimulation. In trajectoryGuide, the model proposed by Dembek et al. (2017) has been utilized since it does not require image processing to be conducted prior to VAT computation 1. The calculation of the stimulation field radius based on the DBS stimulation parameters is described the following equation:
r=\\left ( \\frac{pw}{90 \\mu s} \\right )*\\sqrt{0.72\\frac{Vm}{A}*\\frac{I}{165 V/m}}\u2003\u2003where pw is the pulse width (microseconds), A is the amplitude of stimulation (voltage or milliamperes), and I is the impedance (Ohms) of the electrode contact being used for stimulation. The value 0.72 Vm is a constant validated by Dembek et al. (2017) in a previous study 1. A more complex stimulation field model will eventually be incorporated into trajectoryGuide that can handle bipolar stimulation. For now, only a monopolar stimulation model can be generated. To generate the VAT models in trajectoryGuide, the user needs to input the stimulation parameters including the active contact, stimulation amplitude, frequency, pulse width, and impedance
The postop volume of activated tissue module.
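\u2003\u2003A literal transcription of the radius equation above into Python is shown below, following the variable definitions given in the text. It is only an illustration of the arithmetic as printed, not trajectoryGuide's VAT module, and the example input values are made up.

```python
import math

def vat_radius(pulse_width_us, impedance_ohm):
    """Stimulation field radius following the equation above (illustrative).
    pulse_width_us: pulse width pw in microseconds.
    impedance_ohm: impedance I of the active contact, as defined in the text."""
    return (pulse_width_us / 90.0) * math.sqrt(0.72 * impedance_ohm / 165.0)

# Example with made-up values: 60 microsecond pulse width, 1000 ohm impedance.
print(round(vat_radius(60.0, 1000.0), 2))
```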
1 T. A. Dembek et al., \u201cProbabilistic mapping of deep brain stimulation effects in essential tremor.,\u201d NeuroImage Clin., vol. 13, pp. 164\u2013173, 2017, doi: 10.1016/j.nicl.2016.11.019
"}]} \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 1f7aef8..1921ec6 100644 Binary files a/sitemap.xml.gz and b/sitemap.xml.gz differ