diff --git a/assets/img/kaja_spatial_memory-1400.webp b/assets/img/kaja_spatial_memory-1400.webp new file mode 100644 index 000000000000..72166d56bd52 Binary files /dev/null and b/assets/img/kaja_spatial_memory-1400.webp differ diff --git a/assets/img/kaja_spatial_memory-480.webp b/assets/img/kaja_spatial_memory-480.webp new file mode 100644 index 000000000000..bb754a855c33 Binary files /dev/null and b/assets/img/kaja_spatial_memory-480.webp differ diff --git a/assets/img/kaja_spatial_memory-800.webp b/assets/img/kaja_spatial_memory-800.webp new file mode 100644 index 000000000000..30d8c27a936e Binary files /dev/null and b/assets/img/kaja_spatial_memory-800.webp differ diff --git a/assets/img/kaja_spatial_memory.png b/assets/img/kaja_spatial_memory.png new file mode 100644 index 000000000000..d09e5a97ad50 Binary files /dev/null and b/assets/img/kaja_spatial_memory.png differ diff --git a/assets/jupyter/blog.ipynb.html b/assets/jupyter/blog.ipynb.html index ae2dbc304a9d..251d649d5ec4 100644 --- a/assets/jupyter/blog.ipynb.html +++ b/assets/jupyter/blog.ipynb.html @@ -1,4 +1,4 @@ - jekyll-jupyter-notebook20240413-1853-4rwxej \ No newline at end of file + Nolan Lab | Nolan & Sürmeli Labs

Nolan Lab

PI

people/matt.jpg
Matt Nolan

If you are interested in joining the lab, please email me with a CV and a brief description of projects you’d like to work on.

Postdocs

people/ale.jpg
Alessandro di Filippo

people/bri.jpg
Brianna Vandrey

people/harry.jpg
Harry Clark

people/fill.png
Jack Armstrong

Research Software Engineers

people/chris.jpeg
Chris Halcrow

PhD Students

people/fill.png
Ian Hawes

people/junji.jpg
Junji Hua

people/wolf.jpg
Wolf De Wulf

people/fill.png
Yifang Wang

people/zita.png
Zita Francsics

Collaborators

Istvan Gyongy

School of Engineering, University of Edinburgh


Matthias Hennig

School of Informatics, University of Edinburgh


Srinjoy Mitra

School of Engineering, University of Edinburgh


Robert Henderson

School of Engineering, University of Edinburgh


Benjamin Grewe

University of Zurich


Ioannis Papastathopoulos

School of Mathematics, University of Edinburgh


Ian Duguid

Centre for Discovery Brain Sciences, University of Edinburgh


Emma Wood

Centre for Discovery Brain Sciences, University of Edinburgh


\ No newline at end of file diff --git a/projects/high_speed_imaging/index.html b/projects/high_speed_imaging/index.html index fe314fb8d930..ea32b0ac55a1 100644 --- a/projects/high_speed_imaging/index.html +++ b/projects/high_speed_imaging/index.html @@ -1 +1 @@ - High speed imaging | Nolan & Sürmeli Labs

High speed imaging

We are collaborating with Istvan Gyongy, Srinjoy Mitra and Robert Henderson in the School of Engineering in Edinburgh to develop applications of single photon avalanche diode (SPAD) technology to imaging of neural activity.

Rationale

Recent progress towards understanding the biological basis for cognition and its disorders has been driven by advances in molecular tools for labelling and manipulation of defined populations of neurons. However, brain circuits operate at millisecond time scales and our ability to resolve this activity is limited.

Electrophysiological methods have the required temporal precision, but do not reliably identify multiple individual neurons within large populations, a prerequisite for many important questions. In contrast, imaging approaches based on detection of intracellular Ca2+ signals can track activity in large populations of neurons but give only an indirect readout of neuronal activity with limited temporal resolution.

New genetically encoded voltage indicators (GEVIs) address many of the shortcomings of Ca2+ imaging. Crucially, they report both action potentials and subthreshold electrical activity in defined neurons with millisecond resolution. However, application of GEVIs will require a new generation of cameras with frame rates sufficient to monitor millisecond-scale changes.

Goals

We are developing and validating neuroscience applications of SPAD sensor technology developed by our collaborators. SPADs are electronic devices that, when triggered by a single photon, produce an avalanche of electrons and a large electrical current. Because SPADs detect the time at which individual photons arrive, they are well suited to extremely high-speed, low-light imaging. In contrast, standard camera sensors must bin photons across a time window, which limits their sensitivity and temporal resolution. In our prototype SPAD-based cameras, the sensor chip is a similar size to the sensors in the miniature microscopes we currently use for Ca2+ imaging in behaving rodents. It is therefore physically feasible to use SPADs to image activity even in freely behaving animals.

Progress

We published the first proof-of-principle demonstration that SPAD-based cameras can detect neuronal activity reported with GEVIs (Tian et al. 2022). This required introducing viruses encoding GEVIs into a mouse brain, generating known activity patterns in neurons expressing the GEVIs, and using SPAD cameras to image the signal from the GEVIs (see figure).

We are now evaluating new generation SPAD devices and working to miniaturise SPAD cameras for imaging in freely moving animals.

References

\ No newline at end of file + High speed imaging | Nolan & Sürmeli Labs

High speed imaging

We are collaborating with Istvan Gyongy, Srinjoy Mitra and Robert Henderson in the School of Engineering in Edinburgh to develop applications of single photon avalanche diode (SPAD) technology to imaging of neural activity.

Rationale

Recent progress towards understanding the biological basis for cognition and its disorders has been driven by advances in molecular tools for labelling and manipulation of defined populations of neurons. However, brain circuits operate at millisecond time scales and our ability to resolve this activity is limited.

Electrophysiological methods have the required temporal precision, but do not reliably identify multiple individual neurons within large populations, a prerequisite for many important questions. In contrast, imaging approaches based on detection of intracellular Ca2+ signals can track activity in large populations of neurons but give only an indirect readout of neuronal activity with limited temporal resolution.

New genetically encoded voltage indicators (GEVIs) address many of the shortcomings of Ca2+ imaging. Crucially, they report both action potentials and subthreshold electrical activity in defined neurons with millisecond resolution. However, application of GEVIs will require a new generation of cameras with frame rates sufficient to monitor millisecond-scale changes.

Goals

We are developing and validating neuroscience applications of SPAD sensor technology developed by our collaborators. SPADs are electronic devices that, when triggered by a single photon, produce an avalanche of electrons and a large electrical current. Because SPADs detect the time at which individual photons arrive, they are well suited to extremely high-speed, low-light imaging. In contrast, standard camera sensors must bin photons across a time window, which limits their sensitivity and temporal resolution. In our prototype SPAD-based cameras, the sensor chip is a similar size to the sensors in the miniature microscopes we currently use for Ca2+ imaging in behaving rodents. It is therefore physically feasible to use SPADs to image activity even in freely behaving animals.
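
To make the binning argument concrete, here is a minimal Python sketch. It is purely illustrative - not our acquisition code, and every number in it is invented. Poisson photon arrivals during a brief, GEVI-like voltage transient are histogrammed at SPAD-style 100 µs resolution and then integrated into conventional 10 ms frames:

```python
import numpy as np

rng = np.random.default_rng(0)

# Photon rate on a 10 us grid: a baseline with a ~1 ms-wide burst at t = 50 ms,
# standing in for a fast voltage transient reported by a GEVI.
dt = 1e-5
t = np.arange(0, 0.1, dt)
rate = 50_000.0 * (1 + 4 * np.exp(-((t - 0.05) ** 2) / (2 * 0.001**2)))

# Poisson photon counts per 10 us "arrival slot".
counts = rng.poisson(rate * dt)

# SPAD-style readout: time-tagged photons histogrammed at 100 us resolution.
spad = counts.reshape(-1, 10).sum(axis=1)

# Conventional-camera readout: the same photons integrated into 10 ms frames.
frames = counts.reshape(-1, 1000).sum(axis=1)

print("SPAD  peak/baseline:", spad.max() / np.median(spad))
print("frame peak/baseline:", frames.max() / np.median(frames))
```

The transient that stands out clearly in the fine histogram survives as barely a factor of two above baseline once averaged into frames, which is the core argument for photon time-tagging at these speeds.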

Progress

We published the first proof-of-principle demonstration that SPAD-based cameras can detect neuronal activity reported with GEVIs (Tian et al., 2022). This required introducing viruses encoding GEVIs into a mouse brain, generating known activity patterns in neurons expressing the GEVIs, and using SPAD cameras to image the signal from the GEVIs (see figure).

We are now evaluating new generation SPAD devices and working to miniaturise SPAD cameras for imaging in freely moving animals.

References

2022

  1. Single Photon Kilohertz Frame Rate Imaging of Neural Activity
    Tian Tian, Yifang Yuan, Srinjoy Mitra, and 2 more authors
    Advanced Science, Sep 2022
\ No newline at end of file diff --git a/projects/spatial_memory/index.html b/projects/spatial_memory/index.html index 0bbe930ff750..6391d6399ac7 100644 --- a/projects/spatial_memory/index.html +++ b/projects/spatial_memory/index.html @@ -1 +1 @@ - Spatial Memory | Nolan & Sürmeli Labs

Spatial Memory

The Neural Map

Our sense of navigation is not something we tend to consider too deeply in our day-to-day lives. Nevertheless, we somehow have a subconscious knowledge of where we are in the world and where we are going. We use it everywhere - to walk home, to go to the grocery store, to walk to the bathroom in the middle of the night.

How we navigate depends largely on what information is available to us. The brain is complex and uses a variety of strategies to determine our position relative to landmarks and to start or goal locations. Among these is beaconing, a strategy in which we navigate towards a distant object - say, “Hey, there’s a neon sign here saying ‘Pub’” - that’s probably the entrance to the pub. But what if you don’t have those clues?

This is where path integration comes in (Tennant et al., 2022).


Path Integration

Say you’re out on a hike with your friends. You’re full of energy during the day and make plenty of detours before you get to the campsite and settle in for a restful night, listening to the soothing patter of rain on the tent. Oh no! It’s 2am and you’ve woken up in what an optimist would call a ‘puddle’ and what anyone else might label ‘a decidedly sizable body of water’.

So much for the guy on Facebook Marketplace promising that the tent was waterproof.

You and your friend make the decision to trek back to the car park. You wish one of you had at least brought a headlamp - you can’t use any landmarks to navigate. Despite that, you somehow manage to make it back in one piece, going straight from the campsite to the car park without any of the diversions you had undertaken on the way there. How?

Two types of sensory information are required to update where we think we are. Allothetic information comes from the outside environment, like that ‘Pub’ sign. Idiothetic information is generated by the body itself: for example, the brain signals responsible for walking to the pub also tell us how far we’ve walked. Path integration uses these idiothetic cues to traverse the mental map in our head. A mouse foraging along a long, winding trajectory, for instance, will have ‘calculated’ its displacement from its nest and can make a beeline safely back if it suddenly runs into a fox.

mouse integration

Studying path integration is complicated by the fact that it’s only one part of the mechanism behind navigation. It’s an error-prone system, and so it works in combination with other information to create a path - for example, a mouse might use remembered landmarks, olfaction and even way-marking to navigate through an environment. Studying the neurons specifically involved in path integration therefore requires us to rule out every navigational strategy except path integration itself. How do we do that?

pi 1

One way of studying path integration is using virtual reality. Mice run along a VR corridor on a treadmill until they reach a visual cue. If they stop at the visual cue, they get a treat. After a few repetitions, the visual cue is removed - yet the mice still stop in the place where the visual cue would have been. If there are no external cues for the mice to see, how do they know when to stop?

The mice can’t use other navigational strategies, such as beaconing, because there are no allothetic cues, so they’re left with path integration. They might have remembered the time it took to run to the reward zone - but when the speed of the treadmill was changed, the mice didn’t overshoot the reward zone, despite running faster. This implies they weren’t measuring time, but were instead getting their information from a collection of self-motion cues. These include proprioception (the sense of your own position and movement), the vestibular system (your inner ear, which provides a sense of balance and of where your head and body are in space) and motor efference. Over longer distances, the accuracy of this system drops without external input such as landmarks: small errors accumulate and the mice stop further and further from the reward zone.

pi 1

Cell types

Path integration is just one of the many navigational strategies we want to study to understand the basis of the neural map. There are many specialised cell types involved in navigation. For the purpose of this website, let’s focus on three: place cells, grid cells, and head direction cells.

cell types

Grid cells are place-modulated neurons located primarily in the entorhinal cortex that fire periodically in space. As an animal moves through an environment, each grid cell fires at the vertices of a hexagonal lattice that tiles the space. A population of grid cells alone can encode a spatial map.

Place cells, located in the hippocampus, fire when an animal enters a specific location in space. A population of place cells alone can encode a spatial map.

Head direction cells are primarily found in the postsubiculum, and provide directional information by preferentially firing in specific directions. A population of head direction cells can encode which direction you are facing in your spatial map.

These cells potentially have uses beyond encoding physical space - their coding mechanism may support more general problems, such as cognitive mapping. Cognitive mapping swaps the three-dimensional world we move through for a different continuous dimension that represents an abstract concept. Just as we encode a map of our room, we might use the hexagonal firing of grid cells to represent conceptual spaces, such as hierarchies of ideas or temporal sequences of events.

References

2022

  1. Spatial representation by ramping activity of neurons in the retrohippocampal cortex
    Sarah A. Tennant, Harry Clark, Ian Hawes, and 6 more authors
    Current Biology, Oct 2022
\ No newline at end of file + Spatial Memory | Nolan & Sürmeli Labs

Spatial Memory

The Neural Map

Our sense of navigation is not something we tend to consider too deeply in our day-to-day lives. Nevertheless, we somehow have a subconscious knowledge of where we are in the world and where we are going. We use it everywhere - to walk home, to go to the grocery store, to walk to the bathroom in the middle of the night.

How we navigate depends largely on what information is available to us. The brain is complex and uses a variety of strategies to determine our position relative to landmarks and to start or goal locations. Among these is beaconing, a strategy in which we navigate towards a distant object - say, “Hey, there’s a neon sign here saying ‘Pub’” - that’s probably the entrance to the pub. But what if you don’t have those clues?


Path Integration

Say you are out on a hike with your friends. You are full of energy during the day and make plenty of detours before you get to the campsite and settle in for a restful night, listening to the soothing patter of rain on the tent. Oh no! It is 2am and you have woken up in what an optimist would call a ‘puddle’ and what anyone else might label ‘a decidedly sizable body of water’.

So much for the guy on Facebook Marketplace promising that the tent was waterproof.

You and your friend make the decision to trek back to the car park. You wish one of you had at least brought a headlamp - you cannot use any landmarks to navigate. Despite that, you somehow manage to make it back in one piece, going straight from the campsite to the car park without any of the diversions you had undertaken on the way there. How?

Two types of sensory information are required to update where we think we are. Allothetic information comes from the outside environment, like that ‘Pub’ sign. Idiothetic information is generated by the body itself: for example, the brain signals responsible for walking to the pub also tell us how far we have walked. Path integration uses these idiothetic cues to traverse the mental map in our head. A mouse foraging along a long, winding trajectory, for instance, will have ‘calculated’ its displacement from its nest and can make a beeline safely back if it suddenly runs into a fox.
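
For readers who like to see the arithmetic, the sketch below (a toy model in Python, not code from any study cited here) implements this dead-reckoning computation: summing the speed and heading signals step by step gives an estimated position, and the homing vector is simply that position negated.

```python
import numpy as np

rng = np.random.default_rng(1)

# A winding foraging path, described only by self-motion cues:
# a step length and a change of heading at each step.
n_steps = 500
step_len = rng.uniform(0.0, 0.1, n_steps)   # metres per step
turn = rng.normal(0.0, 0.3, n_steps)        # radians per step

heading = np.cumsum(turn)                   # integrate angular self-motion
x = np.cumsum(step_len * np.cos(heading))   # integrate linear self-motion
y = np.cumsum(step_len * np.sin(heading))

home_vector = np.array([-x[-1], -y[-1]])    # the beeline back to the nest
print(f"path walked: {step_len.sum():.1f} m")
print(f"straight-line distance home: {np.linalg.norm(home_vector):.1f} m")
```

A long, winding outbound path collapses into a single short vector: exactly the beeline the foraging mouse can take.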

mouse integration

Studying path integration is complicated by the fact that it’s only one part of the mechanism behind navigation. It’s an error-prone system, and so it works in combination with other information to create a path - for example, a mouse might use remembered landmarks, olfaction and even way-marking to navigate through an environment. Studying the neurons specifically involved in path integration therefore requires us to rule out every navigational strategy except path integration itself. How do we do that?

pi 1

One way of studying path integration is using virtual reality. Mice run along a VR corridor on a treadmill until they reach a visual cue. If they stop at the visual cue, they get a treat. After a few repetitions, the visual cue is removed - yet the mice still stop in the place where the visual cue would have been. If there are no external cues for the mice to see, how do they know when to stop?

The mice cannot use other navigational strategies, such as beaconing, because there are no allothetic cues, so they are left with path integration (Tennant et al., 2022). They might have remembered the time it took to run to the reward zone - but when the speed of the treadmill was changed, the mice did not overshoot the reward zone, despite running faster. This implies they were not measuring time, but were instead getting their information from a collection of self-motion cues. These include proprioception (the sense of your own position and movement), the vestibular system (your inner ear, which provides a sense of balance and of where your head and body are in space) and motor efference. Over longer distances, the accuracy of this system drops without external input such as landmarks: small errors accumulate and the mice stop further and further from the reward zone.
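
The error accumulation in that last sentence falls out of the same kind of toy model. If each self-motion estimate carries a little noise (the 5% figure below is invented for illustration), the spread of stopping positions grows with the distance run:

```python
import numpy as np

rng = np.random.default_rng(2)

step = 0.01          # the animal updates its estimate every 1 cm
noise_sd = 0.05      # 5% multiplicative error per update (invented)

for track_m in (2.0, 4.0, 8.0):
    n = int(track_m / step)
    # 1,000 simulated runs: each 1 cm increment is slightly misjudged.
    stops = (step * (1 + rng.normal(0.0, noise_sd, size=(1000, n)))).sum(axis=1)
    print(f"{track_m:.0f} m track: stop position SD = {stops.std():.3f} m")
```

The spread grows roughly with the square root of the distance travelled, so longer landmark-free tracks produce systematically sloppier stops, matching the behaviour.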

pi 1

Cell types

Path integration is just one of the many navigational strategies we want to study to understand the basis of the neural map. There are many specialised cell types involved in navigation. For the purpose of this website, let us focus on three: place cells, grid cells, and head direction cells.

cell types

Grid cells are place-modulated neurons located primarily in the entorhinal cortex that fire periodically in space. As an animal moves through an environment, each grid cell fires at the vertices of a hexagonal lattice that tiles the space. A population of grid cells alone can encode a spatial map (Clark & Nolan, 2024).
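
The hexagonal pattern has a famously simple mathematical description. As a quick illustration (a standard textbook construction, not code from the cited paper), summing three cosine gratings oriented 60° apart produces a grid-cell-like rate map:

```python
import numpy as np

grid_scale = 0.5                             # metres between firing fields
k = 4 * np.pi / (np.sqrt(3) * grid_scale)    # spatial frequency of each grating
angles = np.deg2rad([0, 60, 120])

# Firing rate over a 2 m x 2 m arena: three plane waves 60 degrees apart
# interfere to produce fields on a hexagonal lattice.
x, y = np.meshgrid(np.linspace(0, 2, 200), np.linspace(0, 2, 200))
rate = sum(np.cos(k * (x * np.cos(a) + y * np.sin(a))) for a in angles)
rate = np.maximum(rate, 0)                   # rectify: rates cannot be negative

print(rate.shape)                            # (200, 200) map, peaks on a hex grid
```

The peaks of this map fall on a hexagonal lattice with the chosen field spacing.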

Place cells, located in the hippocampus, fire when an animal enters a specific location in space. A population of place cells alone can encode a spatial map.

Head direction cells are primarily found in the postsubiculum, and provide directional information by preferentially firing in specific directions. A population of head direction cells can encode which direction you are facing in your spatial map (Gerlei et al., 2020).

These cells potentially have uses beyond encoding physical space - their coding mechanism may support more general problems, such as cognitive mapping. Cognitive mapping swaps the three-dimensional world we move through for a different continuous dimension that represents an abstract concept. Just as we encode a map of our room, we might use the hexagonal firing of grid cells to represent conceptual spaces, such as hierarchies of ideas or temporal sequences of events.

References

2024

  1. Task-anchored grid cell firing is selectively associated with successful path integration-dependent behaviour
    Harry Clark and Matthew F. Nolan
    eLife, Mar 2024

2022

  1. Spatial representation by ramping activity of neurons in the retrohippocampal cortex
    Sarah A. Tennant, Harry Clark, Ian Hawes, and 6 more authors
    Current Biology, Oct 2022

2020

  1. Grid cells are modulated by local head direction
    Klara Gerlei, Jessica Passlack, Ian Hawes, and 4 more authors
    Nature Communications, Aug 2020
\ No newline at end of file diff --git a/research/index.html b/research/index.html index 88edd82d8078..1e30d3bd15a9 100644 --- a/research/index.html +++ b/research/index.html @@ -1 +1 @@ - Research | Nolan & Sürmeli Labs

Research

We aim to understand the cellular and molecular basis for computations that underlie episodic memory.

\ No newline at end of file + Research | Nolan & Sürmeli Labs

Research

We aim to understand the cellular and molecular basis for computations that underlie episodic memory.

\ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index e91b82d665c7..15d4df47d61b 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -1 +1 @@ - https://mattnolanlab.github.io/news/announcement_1/ 2024-03-14T19:35:00+00:00 https://mattnolanlab.github.io/news/HarryPaper/ 2024-03-28T22:42:00+00:00 https://mattnolanlab.github.io/blog/2015/formatting-and-links/ 2015-03-15T16:40:16+00:00 https://mattnolanlab.github.io/blog/2015/images/ 2015-05-15T21:01:00+00:00 https://mattnolanlab.github.io/blog/2015/code/ 2015-07-15T15:09:00+00:00 https://mattnolanlab.github.io/blog/2015/math/ 2015-10-20T15:12:00+00:00 https://mattnolanlab.github.io/blog/2015/disqus-comments/ 2015-10-20T15:59:00+00:00 https://mattnolanlab.github.io/blog/2020/twitter/ 2020-09-28T15:12:00+00:00 https://mattnolanlab.github.io/blog/2020/github-metadata/ 2020-09-28T21:01:00+00:00 https://mattnolanlab.github.io/blog/2021/distill/ 2021-05-22T00:00:00+00:00 https://mattnolanlab.github.io/blog/2021/diagrams/ 2021-07-04T17:39:00+00:00 https://mattnolanlab.github.io/blog/2022/redirect/ 2022-02-01T17:39:00+00:00 https://mattnolanlab.github.io/blog/2022/giscus-comments/ 2022-12-10T15:59:00+00:00 https://mattnolanlab.github.io/blog/2023/table-of-contents/ 2023-03-20T15:59:00+00:00 https://mattnolanlab.github.io/blog/2023/tables/ 2023-03-20T18:37:00+00:00 https://mattnolanlab.github.io/blog/2023/videos/ 2023-04-24T21:01:00+00:00 https://mattnolanlab.github.io/blog/2023/audios/ 2023-04-25T10:25:00+00:00 https://mattnolanlab.github.io/blog/2023/sidebar-table-of-contents/ 2023-04-25T14:14:00+00:00 https://mattnolanlab.github.io/blog/2023/custom-blockquotes/ 2023-05-12T19:53:00+00:00 https://mattnolanlab.github.io/blog/2023/jupyter-notebook/ 2023-07-04T12:57:00+00:00 https://mattnolanlab.github.io/blog/2023/post-bibliography/ 2023-07-12T13:56:00+00:00 https://mattnolanlab.github.io/blog/2023/tikzjax/ 2023-12-12T22:25:00+00:00 https://mattnolanlab.github.io/blog/2024/chartjs/ 2024-01-26T01:04:00+00:00 https://mattnolanlab.github.io/blog/2024/echarts/ 2024-01-26T16:03:00+00:00 https://mattnolanlab.github.io/blog/2024/geojson-map/ 2024-01-26T17:57:00+00:00 https://mattnolanlab.github.io/blog/2024/vega-lite/ 2024-01-27T00:20:00+00:00 https://mattnolanlab.github.io/blog/2024/advanced-images/ 2024-01-27T11:46:00+00:00 https://mattnolanlab.github.io/blog/2024/code-diff/ 2024-01-27T19:22:00+00:00 https://mattnolanlab.github.io/blog/2022/displaying-external-posts-on-your-al-folio-blog/ 2022-04-23T23:20:09+00:00 https://mattnolanlab.github.io/projects/high_speed_imaging/ 2024-04-13T13:30:02+00:00 https://mattnolanlab.github.io/projects/spatial_memory/ 2024-04-13T13:30:02+00:00 https://mattnolanlab.github.io/ https://mattnolanlab.github.io/data-and-code/ https://mattnolanlab.github.io/news/ https://mattnolanlab.github.io/nolan-lab/ https://mattnolanlab.github.io/surmeli-lab/ https://mattnolanlab.github.io/publications/ https://mattnolanlab.github.io/research/ https://mattnolanlab.github.io/lab-values/ https://mattnolanlab.github.io/blog/tag/formatting/ https://mattnolanlab.github.io/blog/tag/links/ https://mattnolanlab.github.io/blog/tag/images/ https://mattnolanlab.github.io/blog/tag/code/ https://mattnolanlab.github.io/blog/tag/math/ https://mattnolanlab.github.io/blog/tag/comments/ https://mattnolanlab.github.io/blog/tag/metadata/ https://mattnolanlab.github.io/blog/tag/distill/ https://mattnolanlab.github.io/blog/tag/diagrams/ https://mattnolanlab.github.io/blog/tag/toc/ https://mattnolanlab.github.io/blog/tag/tables/ 
https://mattnolanlab.github.io/blog/tag/videos/ https://mattnolanlab.github.io/blog/tag/audios/ https://mattnolanlab.github.io/blog/tag/sidebar/ https://mattnolanlab.github.io/blog/tag/blockquotes/ https://mattnolanlab.github.io/blog/tag/jupyter/ https://mattnolanlab.github.io/blog/tag/bib/ https://mattnolanlab.github.io/blog/tag/charts/ https://mattnolanlab.github.io/blog/tag/maps/ https://mattnolanlab.github.io/blog/category/sample-posts/ https://mattnolanlab.github.io/blog/category/external-services/ https://mattnolanlab.github.io/blog/2015/ https://mattnolanlab.github.io/blog/2020/ https://mattnolanlab.github.io/blog/2021/ https://mattnolanlab.github.io/blog/2022/ https://mattnolanlab.github.io/blog/2023/ https://mattnolanlab.github.io/blog/2024/ https://mattnolanlab.github.io/lighthouse_results/desktop/alshedivat_github_io_al_folio_.html 2024-04-13T13:29:31+00:00 https://mattnolanlab.github.io/lighthouse_results/mobile/alshedivat_github_io_al_folio_.html 2024-04-13T13:29:31+00:00 \ No newline at end of file + https://mattnolanlab.github.io/news/announcement_1/ 2024-03-14T19:35:00+00:00 https://mattnolanlab.github.io/news/HarryPaper/ 2024-03-28T22:42:00+00:00 https://mattnolanlab.github.io/blog/2015/formatting-and-links/ 2015-03-15T16:40:16+00:00 https://mattnolanlab.github.io/blog/2015/images/ 2015-05-15T21:01:00+00:00 https://mattnolanlab.github.io/blog/2015/code/ 2015-07-15T15:09:00+00:00 https://mattnolanlab.github.io/blog/2015/math/ 2015-10-20T15:12:00+00:00 https://mattnolanlab.github.io/blog/2015/disqus-comments/ 2015-10-20T15:59:00+00:00 https://mattnolanlab.github.io/blog/2020/twitter/ 2020-09-28T15:12:00+00:00 https://mattnolanlab.github.io/blog/2020/github-metadata/ 2020-09-28T21:01:00+00:00 https://mattnolanlab.github.io/blog/2021/distill/ 2021-05-22T00:00:00+00:00 https://mattnolanlab.github.io/blog/2021/diagrams/ 2021-07-04T17:39:00+00:00 https://mattnolanlab.github.io/blog/2022/redirect/ 2022-02-01T17:39:00+00:00 https://mattnolanlab.github.io/blog/2022/giscus-comments/ 2022-12-10T15:59:00+00:00 https://mattnolanlab.github.io/blog/2023/table-of-contents/ 2023-03-20T15:59:00+00:00 https://mattnolanlab.github.io/blog/2023/tables/ 2023-03-20T18:37:00+00:00 https://mattnolanlab.github.io/blog/2023/videos/ 2023-04-24T21:01:00+00:00 https://mattnolanlab.github.io/blog/2023/audios/ 2023-04-25T10:25:00+00:00 https://mattnolanlab.github.io/blog/2023/sidebar-table-of-contents/ 2023-04-25T14:14:00+00:00 https://mattnolanlab.github.io/blog/2023/custom-blockquotes/ 2023-05-12T19:53:00+00:00 https://mattnolanlab.github.io/blog/2023/jupyter-notebook/ 2023-07-04T12:57:00+00:00 https://mattnolanlab.github.io/blog/2023/post-bibliography/ 2023-07-12T13:56:00+00:00 https://mattnolanlab.github.io/blog/2023/tikzjax/ 2023-12-12T22:25:00+00:00 https://mattnolanlab.github.io/blog/2024/chartjs/ 2024-01-26T01:04:00+00:00 https://mattnolanlab.github.io/blog/2024/echarts/ 2024-01-26T16:03:00+00:00 https://mattnolanlab.github.io/blog/2024/geojson-map/ 2024-01-26T17:57:00+00:00 https://mattnolanlab.github.io/blog/2024/vega-lite/ 2024-01-27T00:20:00+00:00 https://mattnolanlab.github.io/blog/2024/advanced-images/ 2024-01-27T11:46:00+00:00 https://mattnolanlab.github.io/blog/2024/code-diff/ 2024-01-27T19:22:00+00:00 https://mattnolanlab.github.io/blog/2022/displaying-external-posts-on-your-al-folio-blog/ 2022-04-23T23:20:09+00:00 https://mattnolanlab.github.io/projects/high_speed_imaging/ 2024-04-13T14:21:42+00:00 https://mattnolanlab.github.io/projects/spatial_memory/ 2024-04-13T14:21:42+00:00 
https://mattnolanlab.github.io/ https://mattnolanlab.github.io/data-and-code/ https://mattnolanlab.github.io/news/ https://mattnolanlab.github.io/nolan-lab/ https://mattnolanlab.github.io/surmeli-lab/ https://mattnolanlab.github.io/publications/ https://mattnolanlab.github.io/research/ https://mattnolanlab.github.io/lab-values/ https://mattnolanlab.github.io/blog/tag/formatting/ https://mattnolanlab.github.io/blog/tag/links/ https://mattnolanlab.github.io/blog/tag/images/ https://mattnolanlab.github.io/blog/tag/code/ https://mattnolanlab.github.io/blog/tag/math/ https://mattnolanlab.github.io/blog/tag/comments/ https://mattnolanlab.github.io/blog/tag/metadata/ https://mattnolanlab.github.io/blog/tag/distill/ https://mattnolanlab.github.io/blog/tag/diagrams/ https://mattnolanlab.github.io/blog/tag/toc/ https://mattnolanlab.github.io/blog/tag/tables/ https://mattnolanlab.github.io/blog/tag/videos/ https://mattnolanlab.github.io/blog/tag/audios/ https://mattnolanlab.github.io/blog/tag/sidebar/ https://mattnolanlab.github.io/blog/tag/blockquotes/ https://mattnolanlab.github.io/blog/tag/jupyter/ https://mattnolanlab.github.io/blog/tag/bib/ https://mattnolanlab.github.io/blog/tag/charts/ https://mattnolanlab.github.io/blog/tag/maps/ https://mattnolanlab.github.io/blog/category/sample-posts/ https://mattnolanlab.github.io/blog/category/external-services/ https://mattnolanlab.github.io/blog/2015/ https://mattnolanlab.github.io/blog/2020/ https://mattnolanlab.github.io/blog/2021/ https://mattnolanlab.github.io/blog/2022/ https://mattnolanlab.github.io/blog/2023/ https://mattnolanlab.github.io/blog/2024/ https://mattnolanlab.github.io/lighthouse_results/desktop/alshedivat_github_io_al_folio_.html 2024-04-13T14:21:17+00:00 https://mattnolanlab.github.io/lighthouse_results/mobile/alshedivat_github_io_al_folio_.html 2024-04-13T14:21:17+00:00 \ No newline at end of file