Hi, I’m a postdoc in the lab looking at spatial behaviours in rodents during open field exploration and goal-directed navigation tasks in virtual reality. More broadly, I am interested in how the brain forges meaningful representations of the physical world to guide actions within it. I use techniques such as tetrode, silicon probe and Neuropixels recordings in mice, with the aim of recording from many neurons at the same time to observe population-level neural codes.
I’m Chris Halcrow, a research software engineer with a background in maths and physics. Generally, I’m interested in good software practices in science: openly publishing data, making sure that calculations are reproducible, and that code is transparent and easy to understand. In this lab, I focus on software called SpikeInterface, which implements “spike sorting”. This is where you take a recorded electrical signal from the brain and try to decipher when, and which, neurons have fired during the recording.
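For readers curious what the detection step of spike sorting looks like in practice, here is a toy numpy sketch that flags threshold crossings in a synthetic extracellular trace. It is purely illustrative and does not use the SpikeInterface API; the 5×MAD threshold is just a common heuristic.

```python
# Toy illustration of the detection step in spike sorting (not the SpikeInterface API):
# find threshold crossings in a synthetic extracellular trace using a MAD-based threshold.
import numpy as np

rng = np.random.default_rng(0)
fs = 30_000                                  # sampling rate (Hz), typical for extracellular recordings
t = np.arange(0, 1.0, 1 / fs)                # 1 s of signal
trace = rng.normal(0, 10, t.size)            # background noise (arbitrary units)

# add a few fake spikes: brief negative deflections at known times
spike_times = np.array([0.10, 0.35, 0.36, 0.72])
spike_shape = -80 * np.exp(-np.arange(30) / 8.0)   # ~1 ms negative-going waveform
for st in spike_times:
    i = int(st * fs)
    trace[i:i + spike_shape.size] += spike_shape

# threshold at 5x a MAD-based estimate of the noise standard deviation (a common heuristic)
noise_sd = np.median(np.abs(trace)) / 0.6745
threshold = -5 * noise_sd
crossings = np.flatnonzero((trace[1:] < threshold) & (trace[:-1] >= threshold))

print("detected spike times (s):", np.round(crossings / fs, 3))
```

Real spike sorting goes much further (waveform extraction, clustering into putative neurons, quality control), which is exactly the pipeline SpikeInterface wraps.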
I am a computer scientist from Brussels, currently researching AI in biomedical contexts, as part of the 2022 cohort of the UKRI CDT in Biomedical AI. Coming from a computational background, I entered neuroscience with an interest in how our brains compute.
I spend most of my time analysing the lab’s electrophysiology recordings, applying a variety of statistical and machine learning tools to try to get insight into what is happening. The entorhinal cortex is a great site for this type of research, as it contains a bunch of neurons whose activity seems to correlate with spatial variables. However, we have not yet managed to combine these functional neuron types into a model that can perform the computations needed for our spatial behaviour. My project aims at (1) finding out whether this building-block approach even makes sense and (2) devising and verifying alternative models for spatial computation that are a potentially better fit for the circular, dynamic nature of the brain.
I am a joint PhD student in the Nolan lab and the Osterweil lab. My research focuses on understanding the development of audiogenic (or sound-induced) seizures in the Fmr1 knockout mouse, which mimics the symptoms of patients with Fragile X Syndrome. High-intensity sound stimulation makes these mice experience tonic-clonic seizures, which has been linked to neuronal hyperactivation in the dorsal nucleus of the inferior colliculus, a major relay station for auditory information.
My project involves using RNA-seq and proteomic approaches, as well as imaging and patch-clamp electrophysiology, to identify the cell types of these hyperactivated neurons, their recruitment dynamics, and the intracellular electrical properties that lead to audiogenic seizures. Ultimately, we want to assess the effectiveness of various drug and gene therapies on the observed neuronal hyperactivation and on the behavioural manifestation of the seizures.
Centre for Discovery Brain Sciences, University of Edinburgh
Emma Wood
Centre for Discovery Brain Sciences, University of Edinburgh
\ No newline at end of file
diff --git a/projects/high_speed_imaging/index.html b/projects/high_speed_imaging/index.html
index fe314fb8d930..ea32b0ac55a1 100644
--- a/projects/high_speed_imaging/index.html
+++ b/projects/high_speed_imaging/index.html
@@ -1 +1 @@
- High speed imaging | Nolan & Sürmeli Labs
High speed imaging
We are collaborating with Istvan Gyongy, Srinjoy Mitra and Robert Henderson in the School of Engineering in Edinburgh to develop applications of single photon avalanche diode (SPAD) technology to imaging of neural activity.
Rationale
Recent progress towards understanding the biological basis for cognition and its disorders has been driven by advances in molecular tools for labelling and manipulation of defined populations of neurons. However, brain circuits operate at millisecond time scales and our ability to resolve this activity is limited.
Electrophysiological methods have the required temporal precision, but do not reliably identify multiple individual neurons within large populations, a prerequisite for addressing many important questions. In contrast, imaging approaches based on detection of intracellular Ca2+ signals can track activity in large populations of neurons but give only an indirect readout of neuronal activity with limited temporal resolution.
New genetically encoded voltage indicators (GEVIs) address many of the shortcomings of Ca2+ imaging. Crucially, they report both action potentials and subthreshold electrical activity in defined neurons with millisecond resolution. However, application of GEVIs will require a new generation of cameras with frame rates sufficient to monitor millisecond scale changes.
Goals
We are developing and validating neuroscience applications of SPAD sensor technology developed by our collaborators. SPADs are electronic devices that, when activated by a single photon, cause an avalanche of electrons and a large electric current. Because SPADs detect the time at which individual photons arrive, they are well suited to extremely high-speed and low-light imaging. In contrast, standard camera sensors must bin photons across a time window, which limits their sensitivity and temporal resolution. In our prototype SPAD-based cameras, the sensor chip is a similar size to sensors used in miniature microscopes we currently use for Ca2+ imaging in behaving rodents. It is therefore physically feasible to use SPADs to image activity even in freely behaving animals.
Progress
We published the first proof-of-principle demonstration that SPAD-based cameras can detect neuronal activity reported with GEVIs (Tian et al., 2022). This required introducing viruses encoding GEVIs into a mouse brain, generating known activity patterns in neurons expressing the GEVIs, and using SPAD cameras to image the signal from the GEVIs (see figure).
We are now evaluating new generation SPAD devices and working to miniaturise SPAD cameras for imaging in freely moving animals.
References
\ No newline at end of file
+ High speed imaging | Nolan & Sürmeli Labs
High speed imaging
We are collaborating with Istvan Gyongy, Srinjoy Mitra and Robert Henderson in the School of Engineering in Edinburgh to develop applications of single photon avalanche diode (SPAD) technology to imaging of neural activity.
Rationale
Recent progress towards understanding the biological basis for cognition and its disorders has been driven by advances in molecular tools for labelling and manipulation of defined populations of neurons. However, brain circuits operate at millisecond time scales and our ability to resolve this activity is limited.
Electrophysiological methods have the required temporal precision, but do not reliably identify multiple individual neurons within large populations, a prerequisite for addressing many important questions. In contrast, imaging approaches based on detection of intracellular Ca2+ signals can track activity in large populations of neurons but give only an indirect readout of neuronal activity with limited temporal resolution.
New genetically encoded voltage indicators (GEVIs) address many of the shortcomings of Ca2+ imaging. Crucially, they report both action potentials and subthreshold electrical activity in defined neurons with millisecond resolution. However, application of GEVIs will require a new generation of cameras with frame rates sufficient to monitor millisecond scale changes.
Goals
We are developing and validating neuroscience applications of SPAD sensor technology developed by our collaborators. SPADs are electronic devices that, when activated by a single photon, cause an avalanche of electrons and a large electric current. Because SPADs detect the time at which individual photons arrive, they are well suited to extremely high-speed and low-light imaging. In contrast, standard camera sensors must bin photons across a time window, which limits their sensitivity and temporal resolution. In our prototype SPAD-based cameras, the sensor chip is a similar size to sensors used in miniature microscopes we currently use for Ca2+ imaging in behaving rodents. It is therefore physically feasible to use SPADs to image activity even in freely behaving animals.
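As a rough illustration of why frame rate and sensitivity trade off, the toy Python snippet below simulates photon counts per frame at different frame rates. All numbers are invented for illustration and do not describe our sensors; the point is that photon arrivals follow Poisson statistics, so shorter frames collect fewer photons and the shot-noise-limited signal-to-noise ratio falls accordingly.

```python
# Toy illustration (made-up numbers): photon shot noise limits how small a fluorescence
# change can be resolved when frames are made short enough to follow millisecond-scale signals.
import numpy as np

rng = np.random.default_rng(1)
photon_rate = 2e6          # photons per second collected from one cell (illustrative)
dF_over_F = 0.05           # fractional fluorescence change produced by a spike (illustrative)

for frame_rate in (30, 1_000, 10_000):       # frames per second
    n = photon_rate / frame_rate             # mean photons collected per frame
    # shot noise: photon counts are Poisson, so the relative noise per frame is ~1/sqrt(n)
    snr = dF_over_F * np.sqrt(n)
    counts = rng.poisson(n, size=5)           # example per-frame photon counts
    print(f"{frame_rate:>6} fps: ~{n:,.0f} photons/frame, "
          f"single-frame SNR for a 5% signal = {snr:.1f}, sample counts {counts}")
```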
Progress
We published the first proof-of-principle demonstration that SPAD-based cameras can detect neuronal activity reported with GEVIs (Tian et al., 2022). This required introducing viruses encoding GEVIs into a mouse brain, generating known activity patterns in neurons expressing the GEVIs, and using SPAD cameras to image the signal from the GEVIs (see figure).
We are now evaluating new generation SPAD devices and working to miniaturise SPAD cameras for imaging in freely moving animals.
Establishing the biological basis of cognition and its disorders will require high precision spatiotemporal measurements of neural activity. Recently developed genetically encoded voltage indicators (GEVIs) report both spiking and subthreshold activity of identified neurons. However, maximally capitalizing on the potential of GEVIs will require imaging at millisecond time scales, which remains challenging with standard camera systems. Here, application of single photon avalanche diode (SPAD) sensors is reported to image neural activity at kilohertz frame rates. SPADs are electronic devices that when activated by a single photon cause an avalanche of electrons and a large electric current. An array of SPAD sensors is used to image individual neurons expressing the GEVI Voltron-JF525-HTL. It is shown that subthreshold and spiking activity can be resolved with shot noise limited signals at frame rates of up to 10 kHz. SPAD imaging is able to reveal millisecond scale synchronization of neural activity in an ex vivo seizure model. SPAD sensors may have widespread applications for investigation of millisecond timescale neural dynamics.
\ No newline at end of file
diff --git a/projects/spatial_memory/index.html b/projects/spatial_memory/index.html
index 0bbe930ff750..6391d6399ac7 100644
--- a/projects/spatial_memory/index.html
+++ b/projects/spatial_memory/index.html
@@ -1 +1 @@
- Spatial Memory | Nolan & Sürmeli Labs
Spatial Memory
The Neural Map
Our sense of navigation is not something we tend to consider too deeply in our day-to-day lives. Nevertheless, we somehow have a subconscious knowledge of where we are in the world and where we are going. We use it everywhere - to walk home, to go to the grocery store, to walk to the bathroom in the middle of the night.
How we navigate is largely dependent on what information is available to us. The brain is complex and uses a variety of strategies to determine our position relative to other landmarks and to start or goal locations. Among those is beaconing, a strategy where we navigate towards a distant object - say, “Hey, there’s a neon sign here saying ‘Pub’” - that’s probably the entrance to the pub. But what if you don’t have those clues?
Say you’re out on a hike with your friends. You’re full of energy during the day and make plenty of detours before you get to the campsite and settle in for a restful night, listening to the soothing patter of rain on the tent. Oh no! It’s 2am and you’re woken up in what an optimist would call a ‘puddle’, and anyone else might label ‘a decidedly sizable body of water’.
So much for the guy on Facebook Marketplace promising that the tent was waterproof.
You and your friend make the decision to trek back to the car park. You wish one of you had brought a headlamp at least - you can’t use any landmarks to navigate. Despite that, you somehow manage to make it back in one piece, going straight from the campsite to the car park without any of the diversions you had undertaken on the way there. How?
Two types of sensory information are required to update where we think we are. Allothetic information comes from the outside environment, like that ‘Pub’ sign. Idiothetic information is generated by the body itself. For example, the brain signals responsible for walking to the pub also provide us with information about how far we’ve walked. Path integration uses these idiothetic cues to traverse the mental map in our head. A foraging mouse that takes a long, winding trajectory towards its goal, for example, will have ‘calculated’ its displacement from its nest and can make a beeline safely back if it suddenly runs into a fox.
Studying path integration is complicated by the fact that it’s only one part of the mechanism behind navigation. It’s an error-prone system, and so it works in combination with other information to create a path - for example, a mouse might use remembered landmarks, olfaction and even way-marking to navigate through an environment. Studying the neurons specifically involved in path integration therefore requires us to rule out every other strategy, leaving path integration as the only option. How do we do that?
One way of studying path integration is using virtual reality. Mice run through a VR corridor on a treadmill until there’s a visual cue. If they stop at the visual cue, they get a treat. After a few repetitions, the visual cue is removed - yet the mice still stop in the place where the visual cue would have been. If there are no external cues for the mice to see, how do they know when to stop?
The mice can’t use other navigational strategies, such as beaconing, because there are no allothetic cues, so they’re left with path integration. They might have remembered the time it took to run to the reward zone - but when the speed of the treadmill was changed, the mice didn’t overshoot the reward zone, despite running faster. This implies they weren’t measuring the time, but were instead getting their information from a collection of self-motion cues. This includes proprioception (the sense of your self-position and self-movement) along with the vestibular system (your inner ear, which provides a sense of balance and awareness of your head and body in space) and motor efference. Over longer distances, the accuracy of this system drops without external input such as landmarks, as small errors start to accumulate and the mice start to stop further away from the reward zone.
Cell types
Path integration is just one of the many navigational strategies we want to study to understand the basis of the neural map. There are many specialised cell types involved in navigation. For the purpose of this website, let’s focus on three: place cells, grid cells, and head direction cells.
Grid cells are place-modulated neurons located primarily in the entorhinal cortex that fire periodically in space, mapping a triangular grid across an environment. As an animal moves through space, a grid cell’s firing fields form a hexagonal pattern that tiles the environment. A population of grid cells alone can encode a spatial map.
Place cells, located in the hippocampus, fire when an animal enters a specific location in space. A population of place cells alone can encode a spatial map.
Head direction cells are primarily found in the postsubiculum and provide directional information by firing preferentially when the head points in a particular direction. A population of head direction cells can encode which direction you are facing in your spatial map.
These cells potentially have uses outside of encoding physical space - their coding mechanism may be used for more general problems, such as cognitive mapping. Cognitive mapping refers to swapping out the three-dimensional world we interact with for a different, continuous dimension that represents an abstract concept. In the same way that we encode a map of our room, we might use the hexagonal firing properties of grid cells to represent conceptual spaces, for example organising ideas hierarchically within our brain, or recording temporal sequences of events.
Neurons in the retrohippocampal cortices play crucial roles in spatial memory. Many retrohippocampal neurons have firing fields that are selectively active at specific locations, with memory for rewarded locations associated with reorganization of these firing fields. Whether this is the sole strategy for representing spatial memories is unclear. Here, we demonstrate that during a spatial memory task retrohippocampal neurons encode location through ramping activity that extends across segments of a linear track approaching and following a reward, with the rewarded location represented by offsets or switches in the slope of the ramping activity. Ramping representations could be maintained independently of trial outcome and cues marking the reward location, indicating that they result from recall of the track structure. When recorded in an open arena, neurons that generated ramping activity during the spatial memory task were more numerous than grid or border cells, with a majority showing spatial firing that did not meet criteria for classification as grid or border representations. Encoding of rewarded locations through offsets and switches in the slope of ramping activity also emerged in recurrent neural network models trained to solve a similar spatial memory task. Impaired performance of model networks following disruption of outputs from ramping neurons is consistent with this coding strategy supporting navigation to recalled locations of behavioral significance. Our results suggest that encoding of learned spaces by retrohippocampal networks employs both discrete firing fields and continuous ramping representations. We hypothesize that retrohippocampal ramping activity mediates readout of learned models for goal-directed navigation.
\ No newline at end of file
+ Spatial Memory | Nolan & Sürmeli Labs
Spatial Memory
The Neural Map
Our sense of navigation is not something we tend to consider too deeply in our day-to-day lives. Nevertheless, we somehow have a subconscious knowledge of where we are in the world and where we are going. We use it everywhere - to walk home, to go to the grocery store, to walk to the bathroom in the middle of the night.
How we navigate is largely dependent on what information is available to us. The brain is complex and uses a variety of strategies to determine our position relative to other landmarks and to start or goal locations. Among those is beaconing, a strategy where we navigate towards a distant object - say, “Hey, there’s a neon sign here saying ‘Pub’” - that’s probably the entrance to the pub. But what if you don’t have those clues?
Path Integration
Say you are out on a hike with your friends. You are full of energy during the day and make plenty of detours before you get to the campsite and settle in for a restful night, listening to the soothing patter of rain on the tent. Oh no! It is 2am and you have woken up in what an optimist would call a ‘puddle’, and anyone else might label ‘a decidedly sizable body of water’.
So much for the guy on Facebook Marketplace promising that the tent was waterproof.
You and your friend make the decision to trek back to the car park. You wish one of you had brought a headlamp at least - you cannot use any landmarks to navigate. Despite that, you somehow manage to make it back in one piece, going straight from the campsite to the car park without any of the diversions you had undertaken on the way there. How?
Two types of sensory information are required to update where we think we are. Allothetic information comes from the outside environment, like that ‘Pub’ sign. Idiothetic information is generated by the body itself. For example, the brain signals responsible for walking to the pub also provide us with information about how far we have walked. Path integration uses these idiothetic cues to traverse the mental map in our head. A foraging mouse that takes a long, winding trajectory towards its goal, for example, will have ‘calculated’ its displacement from its nest and can make a beeline safely back if it suddenly runs into a fox.
Studying path integration is complicated by the fact that it’s only one part of the mechanism behind navigation. It’s an error-prone system, and so it works in combination with other information to create a path - for example, a mouse might use remembered landmarks, olfaction and even way-marking to navigate through an environment. Studying the neurons specifically involved in path integration therefore requires us to rule out every other strategy, leaving path integration as the only option. How do we do that?
One way of studying path integration is using virtual reality. Mice run through a VR corridor on a treadmill until there is a visual cue. If they stop at the visual cue, they get a treat. After a few repetitions, the visual cue is removed - yet the mice still stop in the place where the visual cue would have been. If there are no external cues for the mice to see, how do they know when to stop?
The mice cannot use other navigational strategies, such as beaconing, because there are no allothetic cues, so they are left with path integration (Tennant et al., 2022). They might have remembered the time it took to run to the reward zone - but when the speed of the treadmill was changed, the mice did not overshoot the reward zone, despite running faster. This implies they were not measuring the time, but were instead getting their information from a collection of self-motion cues. This includes proprioception (the sense of your self-position and self-movement) along with the vestibular system (your inner ear, which provides a sense of balance and awareness of your head and body in space) and motor efference. Over longer distances, the accuracy of this system drops without external input such as landmarks, as small errors start to accumulate and the mice start to stop further away from the reward zone.
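As a concrete picture of what integrating self-motion cues involves, here is a minimal numpy sketch of path integration by dead reckoning. All numbers are illustrative, and this is not a model of the experiments described above: noisy speed and heading samples are summed to estimate position, the homing vector is read off from that estimate, and the estimate drifts further from the true path as the journey gets longer.

```python
# Toy path-integration sketch (illustrative numbers): integrate noisy self-motion samples
# to estimate position, then compute the homing vector back to the start.
import numpy as np

rng = np.random.default_rng(2)
dt = 0.1                                   # time step (s)
n_steps = 600                              # a winding 60 s outbound trip
speed = np.full(n_steps, 0.2)              # running speed (m/s)
heading = np.cumsum(rng.normal(0, 0.15, n_steps))   # heading drifts as the animal meanders

# true outbound path
steps = np.column_stack((np.cos(heading), np.sin(heading))) * (speed * dt)[:, None]
true_pos = np.cumsum(steps, axis=0)

# path integration: the same summation, but each self-motion sample is read with noise,
# so the position estimate accumulates error as the path gets longer
noisy_speed = speed + rng.normal(0, 0.02, n_steps)
noisy_heading = heading + rng.normal(0, 0.05, n_steps)
est_steps = np.column_stack((np.cos(noisy_heading), np.sin(noisy_heading))) * (noisy_speed * dt)[:, None]
est_pos = np.cumsum(est_steps, axis=0)

homing_vector = -est_pos[-1]               # the "beeline" back to the start, from the estimate
error = np.linalg.norm(est_pos - true_pos, axis=1)
print("homing vector (m):", np.round(homing_vector, 2))
print("position error after 10 s, 30 s, 60 s (m):", np.round(error[[99, 299, 599]], 2))
```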
Cell types
Path integration is just one of the many navigational strategies we want to study to understand the basis of the neural map. There are many specialised cell types involved in navigation. For the purpose of this website, let us focus on three: place cells, grid cells, and head direction cells.
Grid cells are place-modulated neurons located primarily in the entorhinal cortex that fire periodically in space, mapping a triangular grid across an environment. As an animal moves through space, a grid cell’s firing fields form a hexagonal pattern that tiles the environment. A population of grid cells alone can encode a spatial map (Clark & Nolan, 2024); a toy sketch of this hexagonal firing pattern is shown below.
Place cells, located in the hippocampus, fire when an animal enters a specific location in space. A population of place cells alone can encode a spatial map.
Head direction cells are primarily found in the postsubiculum and provide directional information by firing preferentially when the head points in a particular direction. A population of head direction cells can encode which direction you are facing in your spatial map (Gerlei et al., 2020).
These cells potentially have uses outside of encoding physical space - their coding mechanism may be used for more general problems, such as cognitive mapping. Cognitive mapping refers to swapping out the three-dimensional world we interact with for a different, continuous dimension that represents an abstract concept. In the same way that we encode a map of our room, we might use the hexagonal firing properties of grid cells to represent conceptual spaces, for example organising ideas hierarchically within our brain, or recording temporal sequences of events.
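To make the idea of firing periodically in space concrete, below is a minimal numpy sketch of the standard idealized grid-cell rate map, built by summing three cosine gratings oriented 60 degrees apart. This is a textbook simplification for illustration, not a model from the studies described here, and the spacing, orientation and phase values are arbitrary.

```python
# Idealized grid-cell rate map as a sum of three cosine gratings 60 degrees apart
# (a textbook simplification for illustration, not the lab's model).
import numpy as np

grid_spacing = 0.5                     # distance between neighbouring firing fields (m)
orientation = np.deg2rad(10)           # orientation of the grid relative to the arena
phase = np.array([0.1, 0.2])           # spatial offset of the pattern (m)

# evaluate the rate map on a 1 m x 1 m arena
x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
pos = np.stack((x - phase[0], y - phase[1]), axis=-1)

k = 4 * np.pi / (np.sqrt(3) * grid_spacing)      # wave-vector magnitude for this field spacing
rate = np.zeros_like(x)
for theta in orientation + np.deg2rad([0, 60, 120]):
    k_vec = k * np.array([np.cos(theta), np.sin(theta)])
    rate += np.cos(pos @ k_vec)

rate = np.maximum(rate, 0)             # rectify; the peaks form a hexagonal lattice of firing fields
print("rate map shape:", rate.shape, "peak value:", round(float(rate.max()), 2))
```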
Grid firing fields have been proposed as a neural substrate for spatial localisation in general or for path integration in particular. To distinguish these possibilities, we investigate firing of grid and non-grid cells in the mouse medial entorhinal cortex during a location memory task. We find that grid firing can either be anchored to the task environment, or can encode distance travelled independently of the task reference frame. Anchoring varied between and within sessions, while spatial firing of non-grid cells was either coherent with the grid population, or was stably anchored to the task environment. We took advantage of the variability in task-anchoring to evaluate whether and when encoding of location by grid cells might contribute to behaviour. We find that when reward location is indicated by a visual cue, performance is similar regardless of whether grid cells are task-anchored or not, arguing against a role for grid representations when location cues are available. By contrast, in the absence of the visual cue, performance was enhanced when grid cells were anchored to the task environment. Our results suggest that anchoring of grid cells to task reference frames selectively enhances performance when path integration is required.
Neurons in the retrohippocampal cortices play crucial roles in spatial memory. Many retrohippocampal neurons have firing fields that are selectively active at specific locations, with memory for rewarded locations associated with reorganization of these firing fields. Whether this is the sole strategy for representing spatial memories is unclear. Here, we demonstrate that during a spatial memory task retrohippocampal neurons encode location through ramping activity that extends across segments of a linear track approaching and following a reward, with the rewarded location represented by offsets or switches in the slope of the ramping activity. Ramping representations could be maintained independently of trial outcome and cues marking the reward location, indicating that they result from recall of the track structure. When recorded in an open arena, neurons that generated ramping activity during the spatial memory task were more numerous than grid or border cells, with a majority showing spatial firing that did not meet criteria for classification as grid or border representations. Encoding of rewarded locations through offsets and switches in the slope of ramping activity also emerged in recurrent neural network models trained to solve a similar spatial memory task. Impaired performance of model networks following disruption of outputs from ramping neurons is consistent with this coding strategy supporting navigation to recalled locations of behavioral significance. Our results suggest that encoding of learned spaces by retrohippocampal networks employs both discrete firing fields and continuous ramping representations. We hypothesize that retrohippocampal ramping activity mediates readout of learned models for goal-directed navigation.
Grid and head direction codes represent cognitive spaces for navigation and memory. Pure grid cells generate grid codes that have been assumed to be independent of head direction, whereas conjunctive cells generate grid representations that are tuned to a single head direction. Here, we demonstrate that pure grid cells also encode head direction, but through distinct mechanisms. We show that individual firing fields of pure grid cells are tuned to multiple head directions, with the preferred sets of directions differing between fields. This local directional modulation is not predicted by previous continuous attractor or oscillatory interference models of grid firing but is accounted for by models in which pure grid cells integrate inputs from co-aligned conjunctive cells with firing rates that differ between their fields. We suggest that local directional signals from grid cells may contribute to downstream computations by decorrelating different points of view from the same location. Neurons with grid firing fields are thought to play important roles in spatial cognition. Here, the authors show that in contrast to assumptions underlying current models and analyses, grid fields are modulated by local head direction; this suggests different mechanisms and new roles for grid firing.
\ No newline at end of file
diff --git a/research/index.html b/research/index.html
index 88edd82d8078..1e30d3bd15a9 100644
--- a/research/index.html
+++ b/research/index.html
@@ -1 +1 @@
- Research | Nolan & Sürmeli Labs
Research
We aim to understand the cellular and molecular basis for computations that underlie episodic memory.
\ No newline at end of file
diff --git a/sitemap.xml b/sitemap.xml
index e91b82d665c7..15d4df47d61b 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -1 +1 @@
- https://mattnolanlab.github.io/news/announcement_1/2024-03-14T19:35:00+00:00https://mattnolanlab.github.io/news/HarryPaper/2024-03-28T22:42:00+00:00https://mattnolanlab.github.io/blog/2015/formatting-and-links/2015-03-15T16:40:16+00:00https://mattnolanlab.github.io/blog/2015/images/2015-05-15T21:01:00+00:00https://mattnolanlab.github.io/blog/2015/code/2015-07-15T15:09:00+00:00https://mattnolanlab.github.io/blog/2015/math/2015-10-20T15:12:00+00:00https://mattnolanlab.github.io/blog/2015/disqus-comments/2015-10-20T15:59:00+00:00https://mattnolanlab.github.io/blog/2020/twitter/2020-09-28T15:12:00+00:00https://mattnolanlab.github.io/blog/2020/github-metadata/2020-09-28T21:01:00+00:00https://mattnolanlab.github.io/blog/2021/distill/2021-05-22T00:00:00+00:00https://mattnolanlab.github.io/blog/2021/diagrams/2021-07-04T17:39:00+00:00https://mattnolanlab.github.io/blog/2022/redirect/2022-02-01T17:39:00+00:00https://mattnolanlab.github.io/blog/2022/giscus-comments/2022-12-10T15:59:00+00:00https://mattnolanlab.github.io/blog/2023/table-of-contents/2023-03-20T15:59:00+00:00https://mattnolanlab.github.io/blog/2023/tables/2023-03-20T18:37:00+00:00https://mattnolanlab.github.io/blog/2023/videos/2023-04-24T21:01:00+00:00https://mattnolanlab.github.io/blog/2023/audios/2023-04-25T10:25:00+00:00https://mattnolanlab.github.io/blog/2023/sidebar-table-of-contents/2023-04-25T14:14:00+00:00https://mattnolanlab.github.io/blog/2023/custom-blockquotes/2023-05-12T19:53:00+00:00https://mattnolanlab.github.io/blog/2023/jupyter-notebook/2023-07-04T12:57:00+00:00https://mattnolanlab.github.io/blog/2023/post-bibliography/2023-07-12T13:56:00+00:00https://mattnolanlab.github.io/blog/2023/tikzjax/2023-12-12T22:25:00+00:00https://mattnolanlab.github.io/blog/2024/chartjs/2024-01-26T01:04:00+00:00https://mattnolanlab.github.io/blog/2024/echarts/2024-01-26T16:03:00+00:00https://mattnolanlab.github.io/blog/2024/geojson-map/2024-01-26T17:57:00+00:00https://mattnolanlab.github.io/blog/2024/vega-lite/2024-01-27T00:20:00+00:00https://mattnolanlab.github.io/blog/2024/advanced-images/2024-01-27T11:46:00+00:00https://mattnolanlab.github.io/blog/2024/code-diff/2024-01-27T19:22:00+00:00https://mattnolanlab.github.io/blog/2022/displaying-external-posts-on-your-al-folio-blog/2022-04-23T23:20:09+00:00https://mattnolanlab.github.io/projects/high_speed_imaging/2024-04-13T13:30:02+00:00https://mattnolanlab.github.io/projects/spatial_memory/2024-04-13T13:30:02+00:00https://mattnolanlab.github.io/https://mattnolanlab.github.io/data-and-code/https://mattnolanlab.github.io/news/https://mattnolanlab.github.io/nolan-lab/https://mattnolanlab.github.io/surmeli-lab/https://mattnolanlab.github.io/publications/https://mattnolanlab.github.io/research/https://mattnolanlab.github.io/lab-values/https://mattnolanlab.github.io/blog/tag/formatting/https://mattnolanlab.github.io/blog/tag/links/https://mattnolanlab.github.io/blog/tag/images/https://mattnolanlab.github.io/blog/tag/code/https://mattnolanlab.github.io/blog/tag/math/https://mattnolanlab.github.io/blog/tag/comments/https://mattnolanlab.github.io/blog/tag/metadata/https://mattnolanlab.github.io/blog/tag/distill/https://mattnolanlab.github.io/blog/tag/diagrams/https://mattnolanlab.github.io/blog/tag/toc/https://mattnolanlab.github.io/blog/tag/tables/https://mattnolanlab.github.io/blog/tag/videos/https://mattnolanlab.github.io/blog/tag/audios/https://mattnolanlab.github.io/blog/tag/sidebar/https://mattnolanlab.github.io/blog/tag/blockquotes/https://mattnolanlab.github.io/blog/tag/jupyter/https://mattnola
nlab.github.io/blog/tag/bib/https://mattnolanlab.github.io/blog/tag/charts/https://mattnolanlab.github.io/blog/tag/maps/https://mattnolanlab.github.io/blog/category/sample-posts/https://mattnolanlab.github.io/blog/category/external-services/https://mattnolanlab.github.io/blog/2015/https://mattnolanlab.github.io/blog/2020/https://mattnolanlab.github.io/blog/2021/https://mattnolanlab.github.io/blog/2022/https://mattnolanlab.github.io/blog/2023/https://mattnolanlab.github.io/blog/2024/https://mattnolanlab.github.io/lighthouse_results/desktop/alshedivat_github_io_al_folio_.html2024-04-13T13:29:31+00:00https://mattnolanlab.github.io/lighthouse_results/mobile/alshedivat_github_io_al_folio_.html2024-04-13T13:29:31+00:00
\ No newline at end of file
+ https://mattnolanlab.github.io/news/announcement_1/2024-03-14T19:35:00+00:00https://mattnolanlab.github.io/news/HarryPaper/2024-03-28T22:42:00+00:00https://mattnolanlab.github.io/blog/2015/formatting-and-links/2015-03-15T16:40:16+00:00https://mattnolanlab.github.io/blog/2015/images/2015-05-15T21:01:00+00:00https://mattnolanlab.github.io/blog/2015/code/2015-07-15T15:09:00+00:00https://mattnolanlab.github.io/blog/2015/math/2015-10-20T15:12:00+00:00https://mattnolanlab.github.io/blog/2015/disqus-comments/2015-10-20T15:59:00+00:00https://mattnolanlab.github.io/blog/2020/twitter/2020-09-28T15:12:00+00:00https://mattnolanlab.github.io/blog/2020/github-metadata/2020-09-28T21:01:00+00:00https://mattnolanlab.github.io/blog/2021/distill/2021-05-22T00:00:00+00:00https://mattnolanlab.github.io/blog/2021/diagrams/2021-07-04T17:39:00+00:00https://mattnolanlab.github.io/blog/2022/redirect/2022-02-01T17:39:00+00:00https://mattnolanlab.github.io/blog/2022/giscus-comments/2022-12-10T15:59:00+00:00https://mattnolanlab.github.io/blog/2023/table-of-contents/2023-03-20T15:59:00+00:00https://mattnolanlab.github.io/blog/2023/tables/2023-03-20T18:37:00+00:00https://mattnolanlab.github.io/blog/2023/videos/2023-04-24T21:01:00+00:00https://mattnolanlab.github.io/blog/2023/audios/2023-04-25T10:25:00+00:00https://mattnolanlab.github.io/blog/2023/sidebar-table-of-contents/2023-04-25T14:14:00+00:00https://mattnolanlab.github.io/blog/2023/custom-blockquotes/2023-05-12T19:53:00+00:00https://mattnolanlab.github.io/blog/2023/jupyter-notebook/2023-07-04T12:57:00+00:00https://mattnolanlab.github.io/blog/2023/post-bibliography/2023-07-12T13:56:00+00:00https://mattnolanlab.github.io/blog/2023/tikzjax/2023-12-12T22:25:00+00:00https://mattnolanlab.github.io/blog/2024/chartjs/2024-01-26T01:04:00+00:00https://mattnolanlab.github.io/blog/2024/echarts/2024-01-26T16:03:00+00:00https://mattnolanlab.github.io/blog/2024/geojson-map/2024-01-26T17:57:00+00:00https://mattnolanlab.github.io/blog/2024/vega-lite/2024-01-27T00:20:00+00:00https://mattnolanlab.github.io/blog/2024/advanced-images/2024-01-27T11:46:00+00:00https://mattnolanlab.github.io/blog/2024/code-diff/2024-01-27T19:22:00+00:00https://mattnolanlab.github.io/blog/2022/displaying-external-posts-on-your-al-folio-blog/2022-04-23T23:20:09+00:00https://mattnolanlab.github.io/projects/high_speed_imaging/2024-04-13T14:21:42+00:00https://mattnolanlab.github.io/projects/spatial_memory/2024-04-13T14:21:42+00:00https://mattnolanlab.github.io/https://mattnolanlab.github.io/data-and-code/https://mattnolanlab.github.io/news/https://mattnolanlab.github.io/nolan-lab/https://mattnolanlab.github.io/surmeli-lab/https://mattnolanlab.github.io/publications/https://mattnolanlab.github.io/research/https://mattnolanlab.github.io/lab-values/https://mattnolanlab.github.io/blog/tag/formatting/https://mattnolanlab.github.io/blog/tag/links/https://mattnolanlab.github.io/blog/tag/images/https://mattnolanlab.github.io/blog/tag/code/https://mattnolanlab.github.io/blog/tag/math/https://mattnolanlab.github.io/blog/tag/comments/https://mattnolanlab.github.io/blog/tag/metadata/https://mattnolanlab.github.io/blog/tag/distill/https://mattnolanlab.github.io/blog/tag/diagrams/https://mattnolanlab.github.io/blog/tag/toc/https://mattnolanlab.github.io/blog/tag/tables/https://mattnolanlab.github.io/blog/tag/videos/https://mattnolanlab.github.io/blog/tag/audios/https://mattnolanlab.github.io/blog/tag/sidebar/https://mattnolanlab.github.io/blog/tag/blockquotes/https://mattnolanlab.github.io/blog/tag/jupyter/https://mattnola
nlab.github.io/blog/tag/bib/https://mattnolanlab.github.io/blog/tag/charts/https://mattnolanlab.github.io/blog/tag/maps/https://mattnolanlab.github.io/blog/category/sample-posts/https://mattnolanlab.github.io/blog/category/external-services/https://mattnolanlab.github.io/blog/2015/https://mattnolanlab.github.io/blog/2020/https://mattnolanlab.github.io/blog/2021/https://mattnolanlab.github.io/blog/2022/https://mattnolanlab.github.io/blog/2023/https://mattnolanlab.github.io/blog/2024/https://mattnolanlab.github.io/lighthouse_results/desktop/alshedivat_github_io_al_folio_.html2024-04-13T14:21:17+00:00https://mattnolanlab.github.io/lighthouse_results/mobile/alshedivat_github_io_al_folio_.html2024-04-13T14:21:17+00:00
\ No newline at end of file