
Category Archives: Neurobiology

Want to Get Involved In Research?

The BioInnovation Group is an undergraduate-run research organization aimed at increasing undergraduate access to research opportunities. We have many programs, ranging from research project teams to skills training (BIG-RT) and Journal Club.

If you are an undergraduate interested in gaining research experience and skills training, check out our website (https://bigucd.com/) to see what programs and opportunities we have to offer. To stay up to date on our events and offerings, you can sign up for our newsletter. We look forward to having you join us!

Newest Posts

Making Brain Stimulation a Mainstream Treatment for Aphasia

By Eva Clubb, Cognitive Science, ’21

Author’s Note: I started researching aphasia for an upper-division writing class and was intrigued by the potential of brain stimulation as an effective and practical treatment for aphasia, with promise for treating other brain disorders as well. Finding an intersection between neuroscience, technology, and linguistics is critical to broadening speech therapy. I hope readers will be excited by the possibilities that advancements in neuroscience offer healthcare.

 

Forms of “language” are found in many species, suggesting an innate drive for communication and a neural predisposition to it. The human brain is specialized for spoken language, such that acute damage or acute stimulation to language-processing areas can modulate a person’s fluency of speech. Whether it’s a heart-to-heart with a good friend, a chat with a local barista, or an update to your doctor about your symptoms, verbal communication mediates a casual relationship between your inner world and the outer one. Wants, needs, worries, and tiny little things you just have to get off your chest can be articulated without a second thought.

Now, imagine being stripped of the ability to communicate with the outer world. An individual’s increasing forgetfulness or tendency to stumble over their words might not be indicative of normal cognitive decline as a result of aging. Instead, subtle decreases in fluency might be symptoms of aphasia: a common and incurable language disorder, usually an effect of underlying brain damage. People with aphasia are perfectly intelligent and cognizant, but are limited in their ability to read, speak, and understand language. 

Aphasia affects almost 2 million Americans; nearly a third of those who experience a stroke develop aphasia because of damage to language-processing areas of the brain [2]. Aphasia refers to language impairments that disrupt the ability to access ideas and thoughts through language; it does not disrupt the ideas and thoughts themselves. The onset of aphasia is associated with brain damage, developing alongside a disease like dementia or following an injury like a stroke. Patients with aphasia may have trouble verbalizing their thoughts in several ways: jumbling words together, speaking gibberish, substituting one word for another, using poor grammar, speaking in short sentences, or having trouble retrieving words and names.

Gradual yet consistent symptom improvements are typically nurtured by a speech-language pathologist. To increase working vocabulary, common speech therapy techniques include conversational therapy, word finding, and naming tasks during weekly or biweekly sessions over the course of months or years. Although prior research in stroke patients suggests that longer sessions, more sessions per week, and more weeks of speech therapy are best for improving speech, the inconvenience and cost are significant impediments to implementation [3]. So how can people learn and retain more linguistic information at a faster rate? This is where neuroscientists enter the conversation. Non-invasive brain stimulation can mimic and amplify the effects of activities we regard as ‘mentally stimulating.’

Non-invasive imaging technology can pinpoint brain areas associated with speech impairments, and applying acute electrical current to those areas triggers neural changes associated with language learning. The neural changes resulting from therapy can be observed either through functional neuroimaging, like fMRI, which reveals areas of activation, or by quantifying changes in brain structure, such as the number and volume of fiber bundles in processing areas [4]. Neural connections, or synapses, are the physical mechanism of learning. When applied electrical currents make synaptic connections more malleable, a state of heightened neuroplasticity, learning occurs most rapidly. Neuroplasticity is often cited in the context of infants, whose brains develop at an incredible rate. However, any experience, task, or event will modify the connections in your brain. Therefore, artificially increasing neuroplasticity through electrical brain stimulation amplifies the effects of a learning experience. It is unclear whether brain stimulation induces true ‘learning’ or the reactivation of dormant, inaccessible information [4].

In the context of aphasia, brain stimulation is paired with conventional speech therapy tasks to accelerate the rate of language improvements. Brain stimulation can be performed at little cost to patients, typically through transcranial direct current stimulation (tDCS). tDCS involves placing two electrodes on the surface of the scalp and sending a small electrical current to the brain. 

Figure 1: Two electrodes placed at the surface of the scalp deliver a small electrical current. Common targets are language processing centers Wernicke’s area and Broca’s area.

 

The process is completely non-invasive and is described by patients as a tingling feeling. tDCS is also in experimental stages as a treatment for other disorders in which brain abnormalities are relatively localized and widely researched, including depression and anxiety.

Neuroscientists have identified major language-processing areas in the brain where damage is linked to aphasia and have experimented with applying tDCS to them. Within the left inferior frontal gyrus, Broca’s area is primarily responsible for speech production. Damage to Broca’s area causes difficulty with naming and may restrict speech to short, ungrammatical sentences. Research in stroke patients shows that combining tDCS to Broca’s area with word-repetition tasks improves accuracy in speech production, while combining it with conversational therapy enhances picture, noun, and verb naming [2]. Likewise, Wernicke’s area in the superior temporal gyrus is associated with speech comprehension. Predictably, impairments to Wernicke’s area cause deficits in understanding others’ speech and writing. Researchers found that combined stimulation of this area and speech therapy improved verbal comprehension. While Broca’s and Wernicke’s areas have been thoroughly investigated, several studies have used MRI to locate brain damage before choosing the application sites of tDCS. Other studies have applied tDCS to unconventional brain areas with mixed effects: stimulation of the cerebellum has enhanced spelling ability, while stimulation of the primary motor cortex improved naming of trained words [5].

The effectiveness of tDCS is verifiable through controlled clinical trials. ‘Sham-tDCS’ acts as a placebo: it produces a tingling sensation in the scalp without affecting neural function, leading patients to believe they are receiving true tDCS. In many instances, groups receiving sham-tDCS showed some improvement, although not as much as groups receiving true tDCS; this could be attributed to the speech therapy tasks themselves or to a mild placebo effect. Regardless, the advantage of the treatment group grew over time: after six months, the sham-tDCS group showed a significant decline in its initial improvements [5]. This suggests tDCS not only promotes learning but is also important for long-term maintenance of gains.
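To make the comparison concrete, a sham-controlled trial is typically analyzed by comparing improvement scores between groups. The sketch below is hypothetical: the scores are invented, and the cited studies used their own designs and statistics.

```python
# Hypothetical sketch of how a sham-controlled tDCS trial might be analyzed.
# The improvement scores below are invented, not data from the cited studies.
from scipy import stats

# Naming-accuracy improvement (percentage points) after treatment.
true_tdcs = [18, 22, 15, 27, 20, 24, 19, 23]  # made-up values
sham_tdcs = [9, 12, 7, 14, 10, 8, 11, 13]     # made-up values

# Two-sample t-test: is mean improvement larger with true tDCS?
t_stat, p_value = stats.ttest_ind(true_tdcs, sham_tdcs)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A significant p-value here would indicate that the treatment effect exceeds what speech therapy and placebo alone produce.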

Scientists’ understanding of the short-term and long-term impacts of tDCS, the specifications of the treatment, and how exactly speech therapy modifies the brain is growing, but unanswered questions remain about the parameters and expected outlook of the treatment. What is the best combination of speech therapy tasks and stimulation sites? How many sessions of tDCS are necessary? How long should the electrical current be applied? What results should be expected, and how should conversational ability change?

Tentative answers to these questions establish tDCS as a feasible supplement to speech therapy. Knowledge about brain structure and linguistic function creates an interdisciplinary approach to aphasia treatment. Modern treatments like tDCS are enabled by technology and mitigate the time and cost barriers of intensive speech therapy. Healthcare workers, neuroscientists, and patients alike stand to benefit from investigating how to make brain stimulation a mainstream treatment for aphasia.

 

References: 

  1. Chomsky, N. “On the Biological Basis of Language Capacities.” In The Neuropsychology of Language: Essays in Honor of Eric Lenneberg, edited by R.W. Rieber, 1-24. Boston, MA: Springer. 1976
  2. Biou, Cassoudesalle, Cogne, Sibon, De Gabory, Dehail, Aupy, Glize. 2019. Transcranial direct current stimulation in post-stroke aphasia rehabilitation: A systematic review. Ann Phys Rehabil Med 62(2):104-121. doi: 10.1016/j.rehab.2019.01.003
  3. Breitenstein, Grewe, Flöel, Ziegler, Springer, Martus, Huber, Willmes, Ringelstein, Haeusler, Abel, Glindemann, Domahs, Regenbrecht, Schlenck, Thomas, Obrig, Ernst de Langen, Rocker, Wigbers, Rühmkorf, Hempen, List, Baumgaertner. 2017. Intensive speech and language therapy in patients with chronic aphasia after stroke: a randomised, open-label, blinded-endpoint, controlled trial in a health-care setting. Lancet 389(10078):1528-1538. doi: 10.1016/S0140-6736(17)30067-3
  4. Crosson B, Rodriguez AD, Copland D, Fridriksson J, Krishnamurthy LC, Meinzer M, Raymer AM, Krishnamurthy V, Leff AP. 2019. Neuroplasticity and aphasia treatments: new approaches for an old problem. J Neurol Neurosurg Psychiatry. 90(10): 1147–1155. doi:10.1136/jnnp-2018-319649.
  5. Meinzer M, Darkow R, Lindenberg R, Floel A. 2016. Electrical stimulation of the motor cortex enhances treatment outcome in post-stroke aphasia. Brain. 139(Pt 4):1152-63. doi: 10.1093/brain/aww002

Review of recent progress in the development of genetically encoded calcium indicators for imaging neural activity

By Lia Freed-Doerr, Cognitive Science, Neurobiology, Physiology & Behavior ‘22

Author’s Note: In fall quarter, I got in contact with the Tian lab in the Department of Biochemistry and Molecular Medicine to learn more about optogenetic techniques and the difficulties of in vivo sensing of neural dynamics. With the mentorship of a postdoctoral researcher, I have learned about different high-resolution sensors (or indicators) and expanded my interests to genetically encoded sensors of cellular dynamics. As I began learning about various types of imaging sensors, calcium ion (Ca2+) indicators in particular stuck out to me due to their variety and depth of development. Since I am unable to take part in in-person projects due to COVID restrictions, I began writing this review to ensure my understanding of the topics I was reading about.

 

Abstract

Methods of performing neuroscience research have progressed remarkably in recent years, providing answers to many different types of questions. Genetically encoded indicators (sensors) are of particular interest for answering questions about neural circuits, specific cell populations, and single-cell dynamics. These indicators modulate their fluorescence in different cellular environments, allowing for optical observation. Of the various cellular activities that can be measured by genetically encoded indicators, the dynamics of calcium ions (Ca2+) are of particular interest due to their fundamental importance in neuronal signaling. In this review, we introduce the basic features of genetically encoded calcium indicators (GECIs), including the characteristics of fluorescence, an overview of GECI engineering, and a brief discussion of some common GECI variants and their uses.

Introduction

Modern neuroscientists have found many ways to analyze the information-carrying neuronal circuits and dynamics within the brain. The continued development of genetically encoded optical indicators, specifically Ca2+ indicators, is particularly promising for analyses of single neurons or neural circuits. Optical fluorescence imaging allows large populations of neurons to be examined simultaneously and avoids major damage to the cells of interest [1]. In particular, measuring the dynamics of Ca2+ can be useful in inferring spiking activity in neurons, as Ca2+ is involved in neuronal action potentials. In this review, we will introduce the basic workings of genetically encoded sensors beginning with a ground-up introduction of fluorescence measurements and the process of engineering genetically encoded sensors. Several Ca2+ indicators will be briefly discussed in order to examine recent progress and how this can impact future studies of the brain.

Mechanics of Fluorescence Indicators

Fluorescence imaging is a valuable tool for visualizing populations of cells. It is relatively non-invasive, but some surgical procedures must still be performed in order to use optical tools to study the cortex. A cranial window might be installed in the animal to shine light through; alternatively, an endoscope or fiber optic cable could be installed at the desired depth within brain tissue [2].

Fluorescent proteins (FPs) internally form a barrel-like structure containing the chromophore (also known as the fluorophore), a trio of amino acids responsible for the protein’s fluorescence. The chromophore forms autocatalytically as a post-translational modification, requiring only atmospheric oxygen. Fluorescence is observed when light of an appropriate wavelength excites the chromophore’s electrons, which then emit a lower-energy photon as they return to a lower energy state. Genetically encoded indicators rely on a change in the chromophore environment within the labeling FP. FPs are often connected to sensing domains, which induce the change in the chromophore environment after detecting the event of interest (e.g., Ca2+ binding for GECIs); any number of cellular activities, such as changes in pH or the binding of a ligand, may induce such a conformational change. Sensors can have one or two FPs. Single FP-based sensors are generally preferred as indicators; the green fluorescent protein (GFP), cloned from the jellyfish Aequorea victoria, is the most commonly used FP for single FP sensors [3]. In systems with two FPs with partially overlapping fluorescence spectra, Förster (or fluorescence) resonance energy transfer (FRET) occurs: energy is transferred from the higher-energy (more blue-shifted) donor FP to the lower-energy (more red-shifted) acceptor FP. Genetically manipulating FPs by circular permutation (fusing the original termini of the FP and introducing a new opening closer to the chromophore) can improve their performance in sensors by making the chromophore more accessible to the outside environment and, thus, more susceptible to environmental changes [4]. FPs like GFP are also typically oligomeric in their natural environment (i.e., multiple copies stick together); to help prevent breakdown and allow better combination with sensing domains in indicators, FPs must be mutated to become monomeric [5].

Figure 1: A Jablonski diagram that visualizes an electron’s excitation to a higher energy level by absorption of a photon and subsequent fluorescence emission with energy decay.

To image fluorescent systems, we can use fluorescence microscopy with one or multiple photons (Fig. 2) [2]. In one-photon systems, the fluorophore is excited by a single photon absorbed from a light source. Some energy is lost non-radiatively (without light), resulting in the emission of lower-energy, visible photons from the fluorophore. One-photon systems are relatively inexpensive and fast but can only penetrate tissue to a shallow depth. In contrast, multi-photon microscopy shows more promise for in vivo imaging because of its reduced out-of-focus excitation, light scattering, and phototoxicity. In such systems, the combined energy of multiple photons is required for excitation.

Figure 2: A diagram outlining the setup for a standard fluorescence microscopy experiment.
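As background, standard photophysics (not specific to the cited studies) explains why multi-photon excitation behaves this way. A photon’s energy is inversely proportional to its wavelength, so reaching the same excited state with two photons requires roughly twice the wavelength of one-photon excitation:

```latex
% Photon energy is inversely proportional to wavelength:
E_{\text{photon}} = \frac{hc}{\lambda}
% Two-photon excitation reaches the same excited state with two
% near-simultaneously absorbed photons of about twice the wavelength:
\frac{hc}{\lambda_{1\mathrm{P}}} \approx 2 \cdot \frac{hc}{\lambda_{2\mathrm{P}}}
\quad \Rightarrow \quad \lambda_{2\mathrm{P}} \approx 2\,\lambda_{1\mathrm{P}}
```

Longer (near-infrared) wavelengths scatter less in tissue, and because two photons must arrive almost simultaneously, excitation is confined to the focal point where photon density is highest, which accounts for the reduced out-of-focus excitation and deeper penetration.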

Genetic encoding of sensors

To introduce indicator genes into a system, methods like in utero electroporation or viral vectors can be used [6, 7]. DNA promoters or localization sequences can be used to target specific subtypes of neurons and produce transgenic animals. Transgenic animals, whose genomes have been modified by bacterial artificial chromosomes, CRISPR, or effector nucleases, are particularly useful when longitudinal and intensive sampling is required: genetic changes can be maintained throughout an animal’s lifespan, and lines of transgenic animals can be bred for further testing [2]. A recombinase system administered via viral vector, like the popular Cre/loxP system, can be used to achieve high specificity [6]. In the Cre/loxP system, loxP sequences are placed at specific target sites in genomic DNA, and the Cre-recombinase protein targets those loxP sequences to modify the genetic sequence. Two mouse lines, one carrying the gene of interest flanked by loxP sequences and the other expressing Cre-recombinase, can be bred to produce mice expressing the gene of interest. The Cre driver line expressing Cre-recombinase can be designed to express it only under certain conditions. To apply Cre/loxP to genetically encoded indicator systems, a viral vector delivers the indicator genes into the brain cells of a Cre driver mouse; the indicator is then expressed only where Cre-recombinase is active. Expression continues through that animal’s lifetime, but to create a line of mice expressing the desired indicator, other methods must be used [6]. Through recombinase methods like these, the development of transgenic animal lines continues to improve.

There are several advantages to genetically encoded indicators over other methods of imaging. A wide variety of neuronal events can be observed by constructing indicators from proteins that respond to cellular events, including changes in neurotransmitter concentrations, transmembrane voltage, Ca2+ dynamics, and pH [1]. Genetic encoding also allows for selective sampling of cells based on genotype. Selective sampling is not possible with chemical dyes, nor is viewing the evolution of neuronal dynamics during learning or development [8, 1]. Like chemical dyes, genetically encoded indicators allow for the imaging of brain activity in neurons in vitro and in animals [9]. In effect, neurons carry the machinery within them to automatically report the cellular dynamics of interest.

There are several broad classes of genetically encoded indicators, each based on a different aspect of action potential dynamics [7]. Genetically encoded voltage indicators (GEVIs) operate based on the membrane depolarization that occurs during action potentials. Other indicators, like pH and neurotransmitter sensors, detect vesicular release: genetically encoded pH indicators (GEPIs) react to the decrease in acidity as vesicles fuse with the membrane, and genetically encoded transmitter indicators (GETIs) visualize the release of neurotransmitters into the synapse [1]. Genetically encoded calcium indicators (GECIs) operate based on the rise in cytosolic Ca2+ during an action potential; however, they do not directly measure spiking activity. When an action potential occurs, Ca2+ floods into the cell. This influx is important because calcium ions are crucial for the release of neurotransmitters from vesicles, which then go on to produce signals in other neurons. Milder calcium dynamics are always present in neurons, even at rest. Among these classes, GECIs are perhaps the most developed and, thus, among the most promising.

Engineering genetically-encoded calcium indicators

Performance Criteria

As GECIs are engineered, many performance criteria must be considered, and tradeoffs often occur between them [10]. As an indicator is optimized for one criterion, another often degrades; thus, development of sensors optimized for specific applications is continuous. Key criteria include affinity, specificity, sensitivity, kinetics, and photophysical characteristics.

Affinity, represented by the dissociation constant Kd, describes how tightly the indicator binds its target; Kd is the ligand concentration at which half of the indicator is bound.

Specificity refers to the indicator system’s ability to respond only to the target of interest, as opposed to perhaps similar molecules.

Sensitivity is usually represented by ΔF/F0, the fractional fluorescence change: the change in fluorescence relative to baseline upon a change in concentration of the target molecule. It can also be represented by the signal-to-noise ratio (SNR), the relative difference between the signal of interest and background noise.

Kinetics is the rate of change in fluorescence intensity of the indicator in response to the change in ligand concentration. There tends to be a tradeoff between affinity and kinetics [8].

Photophysical qualities like brightness, photostability, and photoswitching behaviors are also important considerations. In general, brighter or more intensely emitting indicators are desired. Photostability is inversely proportional to the rate of photobleaching (the damaging of the FP so that it becomes unable to fluoresce). Additionally, some indicators have broader ranges of excitation than others, or may change their intensity or sensitivity in different light conditions, which would limit usage.
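Two of these criteria can be written compactly. As a sketch, assuming a Hill-type binding model (many GECIs bind several Ca2+ ions cooperatively, so the Hill coefficient n exceeds 1):

```latex
% Fraction of indicator bound at a given Ca2+ concentration (Hill equation);
% K_d is the concentration at which half of the indicator is bound:
\theta = \frac{[\mathrm{Ca^{2+}}]^{n}}{K_d^{\,n} + [\mathrm{Ca^{2+}}]^{n}}
% Sensitivity as fractional fluorescence change relative to baseline F_0:
\frac{\Delta F}{F_0} = \frac{F - F_0}{F_0}
```

The affinity-kinetics tradeoff mentioned above follows from this picture: tighter binding (lower Kd) generally means slower unbinding, and therefore slower decay of the fluorescence signal.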

GECIs are some of the most widely used genetically encoded indicators in vivo because of their relatively high SNR and improved properties like brightness, photostability, and dynamic range [2]. However, numerous obstacles remain in designing GECIs, and only certain variants have found success in vivo.

Engineering GECIs

Genetically encoded indicators are generally composed of an analyte-binding protein (the sensing domain) and a fluorescent protein (the reporting domain), though additional peptide complexes assist in changing the conformation of the system [2]. Upon a sensing event, the sensing domain undergoes a conformational change, which in turn induces a conformational change in the FP, resulting in fluorescent activity. Engineers of GECIs use two different strategies for constructing reporting domains: FRET-based indicators and single FP-based indicators [7]. When Ca2+ binds to FRET-based indicators, the spatial relationship between the donor and acceptor FPs changes so that energy is transferred from the donor FP to the acceptor FP [2]. One family of such indicators, Cameleon, has had some success; in this family, the sensing and peptide complex is located between two FPs with overlapping spectra. However, the SNR of FRET-based indicators tends to be lower, meaning it is harder to isolate the activity of a neuron from background noise. Because of these drawbacks, we mostly examine the engineering of the more commonly used single FP-based GECIs.
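As an aside before moving on to single-FP designs: the steep distance dependence that makes FRET a useful conformational readout is captured by the standard Förster relation, included here as textbook background rather than anything derived in the cited papers:

```latex
% FRET efficiency as a function of donor-acceptor distance r, where R_0 is
% the Forster radius (the distance at which transfer efficiency is 50%):
E_{\mathrm{FRET}} = \frac{1}{1 + (r / R_0)^{6}}
```

Because efficiency falls off with the sixth power of distance, even a small Ca2+-induced change in donor-acceptor separation near R0 produces a large change in the ratio of acceptor to donor emission.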

There are two popular designs among developers of single FP GECIs [8]. One is based on one of the earliest lines of calcium indicators, GCaMP. GCaMP consists of a circularly permuted green fluorescent protein (cpGFP) inserted between the Ca2+-binding protein calmodulin (CaM) and another peptide, RS20, which binds CaM upon Ca2+ binding. When CaM binds Ca2+, a conformational change is induced in the cpGFP and the sensor fluoresces [10, 11]. Another recent design, the NTnC family of indicators, inserts a calcium-binding domain into a split FP [8]. Unlike GCaMP-type indicators, NTnC indicators display an inverted fluorescence response upon calcium binding (i.e., fluorescence decreases upon Ca2+ binding). They are less optimized than GCaMP variants, but it is hypothesized that their lower Ca2+-binding capacity would interfere less with normal calcium dynamics.

Figure 3: A basic representation of the GCaMP structure.

There have been efforts to expand the color variants of GECIs. In particular, much effort has gone into developing red-fluorescing GECIs, because longer red wavelengths reduce phototoxicity and penetrate tissue better [12]. However, there have been many obstacles: unlike with GFP, inserting calcium-binding domains into red fluorescent proteins (RFPs) disrupts folding and chromophore maturation [8]. A more popular design choice is to replace the GFP in a GCaMP-style indicator with an RFP and optimize the sensor for the new FP [1].

GECIs are improved iteratively through directed evolution and optimization of the linkers between the cpFP and the sensing domains. Site-directed mutagenesis can be used to mutate specific locations to produce novel variants; in the development of one GCaMP variant, mutations were specifically introduced into the calcium-binding-domain-cpGFP linker of a GCaMP5 scaffold to increase sensitivity [11]. In directed evolution, by contrast, mutations are introduced randomly. Upon testing for desired effects, the variants that produce the best results are preserved and propagated. This process may repeat many times, producing increasingly successful indicators as the best-performing mutations survive across generations.
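The mutate-screen-select loop of directed evolution can be summarized in a few lines of code. The sketch below is a toy illustration only: the sequences, mutation rate, and fitness function are all invented stand-ins for what is, in reality, an experimental screen (e.g., measuring ΔF/F0 of each variant).

```python
import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids
OPTIMUM = "MKSWAEL"                # hypothetical best linker sequence

def fitness(seq: str) -> int:
    # Stand-in for an experimental screen: in reality the optimum is unknown
    # and fitness is a measured property such as brightness or dF/F0.
    return sum(a == b for a, b in zip(seq, OPTIMUM))

def mutate(seq: str, rate: float = 0.1) -> str:
    # Random point mutations, loosely analogous to error-prone PCR.
    return "".join(random.choice(ALPHABET) if random.random() < rate else aa
                   for aa in seq)

population = ["MKSAAAA"] * 50          # start from a single parent scaffold
for generation in range(25):
    variants = [mutate(p) for p in population]
    variants.sort(key=fitness, reverse=True)
    population = variants[:5] * 10     # keep and propagate the top performers

print(population[0], fitness(population[0]))
```

Each pass through the loop mirrors one round of mutagenesis and screening; over generations, beneficial mutations accumulate in the surviving lineage.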

Challenges

Genetically encoding indicators, as a rule, comes with challenges. If viral infection is chosen as the genetic encoding scheme, consideration must be given to the many viral serotypes, which vary in efficiency and can be toxic. Furthermore, in utero electroporation can be unpredictable, and transgenic animals may not express indicators at levels high enough to be useful [1].

Sensors may perturb the natural dynamics of the systems they measure, affecting the accuracy of results. GECIs, particularly GCaMP-based designs, may interfere with regular Ca2+ dynamics and gene expression [8]. This interference is likely due to interactions between the calcium-binding sensing domain and native proteins, and to the unavailability of calcium once it is bound to the indicators. There have been efforts to modify the calcium-binding domain so that it binds fewer Ca2+ ions, or to improve affinity so that the indicator operates at lower concentrations of calcium.

There is also difficulty in using these indicators in vivo [2]. Especially in the mammalian brain, SNR is greatly decreased by background noise, and this is before accounting for the protein breakdown that naturally occurs in vivo compared with controlled, cultured environments. Although many indicators have improved structural integrity in vivo, many still cannot be used in living organisms.

Progress in GFP-based GECIs

There has been much development in the GCaMP series as variants are continuously improved by site-directed mutagenesis and computational design [2]. The jGCaMP7 series, built from the GCaMP6 series, provides a good example of optimizing indicators for different purposes: jGCaMP7f is optimized for fast kinetics, jGCaMP7s for high sensitivity (though with slower kinetics), and jGCaMP7b for a brighter baseline fluorescence [11]. All of these indicators are based on the same scaffold but differ drastically in performance because of just a few mutations in the CaM-binding peptide, the GFP, the CaM domain, or the linkers between domains.

Progress in RFP-based GECIs

RFP-based GECIs have important advantages over GFP-based ones. Beyond the value of color variety for tracking distinct populations of cells at once, red GECIs are promising for reducing phototoxicity and allowing deeper imaging [12]. Many promising RFP-based GECIs are in development, though they are generally dimmer than GFP-based indicators and may display photoswitching behaviors under blue light [1]. Each of the major red indicator families, including the R-GECO1 variants like jRGECO1a, the RCaMP series, and, perhaps most promising, K-GECO1, was developed from one of three widely used RFPs [12]. K-GECO1 has shown particular promise, as it works in a distinct spectral range, allowing researchers to work simultaneously with other indicators in multicolor imaging experiments, and it shows minimal fluorescent noise [9].

Designs of red GECIs often expand on the GCaMP design. K-GECO1, for example, follows a similar design, sandwiching the circularly permuted FP between the Ca2+-sensing domain, CaM, and a CaM-binding peptide [12]. Switching the GFP in GCaMP for an RFP comes with the engineering challenges of optimizing linkers and preventing breakdown of the sensor. The increased penetration depth of red GECIs has been used to image subcortical areas like the hippocampus and medial prefrontal cortex relatively noninvasively, demonstrating the applicability of GECIs in neuroscience research [13].

There are other FP-based GECIs in development, but of particular interest is the development of near-infrared GECIs, whose spectral distinction from other indicators would help prevent photoswitching when used with optogenetic tools [8, 14].

Uses and Applications

The applications of GECIs are varied and powerful. Genetically encoded indicators allow for the analysis of cells of a specific type or subpopulation because they select for specific genetic qualities. The first transgenic mouse line expressing GCaMP2 in the cerebellar cortex was generated in 2007 and has allowed for the characterization of certain synapses [6]. GECIs have been used to bring single-cell resolution to the decades-long study of various topographic maps in the brain and to track the communication of neural circuits [2]. In rats, GECIs have been used to monitor neural population behavior during motor learning tasks and to observe the response of cells in the primary visual cortex to sensory deprivation after retinal lesion. They allow examination of ensemble- and single-cell-scale neural events at increasingly precise temporal resolution. Perhaps most importantly, they are often used in conjunction with optogenetic and other experimental methods that allow for the inference of causation: the stimulation techniques used in optogenetic experiments can be paired with precise tracking of calcium or other dynamics in cells of interest [8]. These approaches have generated excitement because they allow the behavior of cells or whole organisms to be examined upon stimulation of even single cells, and their continued expansion is promising.

Conclusion

Many researchers are devoted to developing new and distinct calcium indicators based on existing indicator series. With more GECIs than ever available to neuroscientists, there is some challenge in choosing which is best suited to the exploration of a particular question. With the continuing development of mouse lines and methods of genetically encoding more potent indicators with high temporal resolution, GECIs will continue to be an increasingly important tool within the neuroscientist’s toolkit that allows for population or single-cell imaging with greater resolution than ever before.

 

References:

  1. Lin M, Schnitzer M. 2016. Genetically encoded indicators of neuronal activity. Nature Neuroscience 19(9):1142-1153.
  2. Broussard G, Liang R, Tian L. 2014. Monitoring activity in neural circuits with genetically encoded indicators. Frontiers in Molecular Neuroscience 7.
  3. Cranfill P, et al. 2016. Quantitative assessment of fluorescent proteins. Nature Methods 13(7):557-563.
  4. Baird G, Zacharias D, Tsien R. 1999. Circular permutation and receptor insertion within green fluorescent proteins. PNAS 96(20):11241-11246.
  5. Zacharias D, et al. 2002. Partitioning of Lipid-Modified Monomeric GFPs into Membrane Microdomains of Live Cells. Science 296:913-916.
  6. Knöpfel T. 2012. Genetically encoded optical indicators for the analysis of neuronal circuits. Nature Reviews Neuroscience 13:687-700.
  7. Wang W, Kim C, Ting A. 2019. Molecular tools for imaging and recording neuronal activity. Nature Chemical Biology 15:101-110.
  8. Piatkevich K, Murdock M, Subach F. 2019. Advances in Engineering and Application of Optogenetic Indicators for Neuroscience. Applied Sciences 9(3):562.
  9. Shen Y, et al. 2018. A genetically encoded Ca2+ indicator based on circularly permutated sea anemone red fluorescent protein eqFP578. BMC Biology 16(9).
  10. Shen Y, et al. 2020. Engineering genetically encoded fluorescent indicators for imaging of neuronal activity: Progress and prospects. Neuroscience Research 152:3-14.
  11. Dana H, et al. 2019. High-performance calcium sensors for imaging activity in neuronal populations and microcompartments. Nature Methods 16:649-657.
  12. Molina R, et al. 2019. Understanding the Fluorescence Change in Red Genetically Encoded Calcium Ion Indicators. Biophysical Journal 116:1873-1886.
  13. Kondo M, et al. 2017. Two-photon calcium imaging of the medial prefrontal cortex and hippocampus without cortical invasion. eLife 6.
  14. Qian Y, et al. 2019. A genetically encoded near-infrared fluorescent calcium ion indicator. Nature Methods 16:171-174.

The Relationship Between Itch and Pain in Itch Pathways

By Nathifa Nasim, Neurobiology, Physiology & Behavior ‘22 

Author’s Note: Itch is not a stranger to any of us, but growing up with eczema, I have always been hyper-aware of it. As far back as I can remember, burning hot showers and painful levels of scratching temporarily alleviated the maddening sensation of itch without my understanding of how pain was linked to itch. Once I joined the Carstens Lab, which studies the relationship between itch and pain, these memories were rekindled, and I became interested not only in understanding itch, of which we know so little, but also in how these two sensations interact. This paper was also written for my UWP 104E class.

 

Introduction

Itch is an everyday sensation that nearly all people have experienced. Its origins lie in its role as a defense mechanism: when faced with irritant stimuli, the scratching urge produced by itch can remove potentially harmful substances [1]. Despite its evolutionary advantages, however, itch is often a source of discomfort, and for many it can dramatically impact quality of life. Beyond acute itch, such as that from mosquito bites, debilitating chronic itch can stem from diseases such as cancer, HIV/AIDS, liver and kidney failure, atopic dermatitis, and other skin disorders [2]. Yet despite the widespread impact of itch, many of its mechanisms and pathways remain elusive.

Itch is a somatosensory sensation, relying on the nervous system for detection and perception, and is therefore similar to other somatosensory sensations such as heat, touch, vibration, and, most importantly, pain. Pain and itch have an antagonistic relationship, meaning each sensation has an opposing effect on the other. This is evident in “painful” scratching, which relieves the feeling of itch, and in morphine administration, which reduces pain while increasing itch [3]. The intersection between the two sensations translates to potential treatment as well: chronic itch, for example, can be treated with medications similar to those for chronic pain [2]. Researching the interplay between itch and pain can help illuminate the pathophysiology of itch and how it is perceived as a sensation distinct from pain, and consequently lead to a better understanding of how to treat itch.

Currently, numerous models and theories have been proposed to explain this overlap; however, there is no consensus among itch researchers on which model(s) best explain the relationship between pain and itch. This review provides an overview of the various models of itch transduction and perception and how they have evolved with the accumulation of new research. It also covers the basic mechanisms of itch at the level of the periphery and spinal cord and how itch interacts with pain.

 

Overview of Itch Mechanisms

Itch Activation at the Level of the Epidermis

Pruriceptors are neurons capable of detecting itch; they can be activated by either mechanical stimuli, such as a scratchy fabric, or chemical stimuli, such as poison ivy. For simplicity, and because the chemical pathway is currently better understood, this paper will focus on chemical itch from here onward. Like other somatosensory neurons, the primary pruriceptive neurons have cell bodies residing in the dorsal root ganglion (DRG), close to the spinal cord, with axons stretching to both the periphery and the spinal cord [4, 5]. Unlike other sensory modalities, itch is specific to the outermost epidermis, as opposed to pain, which can also be felt in muscle and bone. The pruriceptors’ branched sensory nerve endings, which terminate in the epidermis, are studded with membrane receptors activated by various “itchy” mediators [4]. The receptors differ in the mediators they respond to but can be broadly grouped into histamine receptors, serotonin receptors, G protein-coupled receptors (GPCRs), toll-like receptors (TLRs), and cytokine and chemokine receptors [4, 5].

Once acute itch is triggered by an irritant, keratinocytes, mast cells, and immune cells release chemical mediators that trigger vasodilation, inflammation, and the arrival of more immune cells to clear the irritant. These chemical mediators can include histamine, serotonin, proteases, cytokines, and chemokines, each of which is associated with a certain receptor [4]. Itch arising from internal factors in disease differs from acute itch in that it instead depends on unknown mediators released into the bloodstream by drugs or diseased organs [4]. Regardless of their origin, however, once released these mediators bind to the receptors on the free nerve endings and activate them. The receptors then depend on various ion channels to depolarize the pruriceptive neuron, which conveys the sensory information to the spinal cord via its axon [4, 5].

Itch Transmission to the Spinal Cord

The axons of pruriceptive DRG neurons also terminate in the spinal dorsal horn. There they synapse onto interneurons in the spinal cord, which connect to projection neurons that carry the sensory information to the brain. The interneurons are important for transmission as well as modulation of itch via excitatory and inhibitory synapses [4, 5, 6]. Electrophysiological responses to itch stimuli in primates have identified the projection neurons as belonging to the spinothalamic tract, which carries axons to the thalamus. This tract also conveys pain and temperature and is consequently an area where itch interacts with other modalities [5, 7]. In addition to interneurons, descending modulation in the spinal cord can also regulate itch. After a cervical cold block was applied to mice, essentially stopping activity at the upper cervical level of the spinal cord, the mice were unable to relieve itch and decrease neuronal firing when lumbar spinal cord neurons below were stimulated by an itchy substance. This suggests that some level of descending modulation was disrupted when the upper spinal cord was blocked [8].

 

Areas of Itch and Pain Interaction

Having briefly discussed the pathways of itch perception, it is important to note how often they converge with those of pain. Firstly, pruriceptors are in fact pruriceptive nociceptive neurons, meaning they are a subset of nociceptors, or pain-sensitive neurons. Although there are many non-pruriceptive nociceptors (neurons sensitive to pain but not itch), studies point toward most pruriceptors being stimulated by pain as well as itch [1, 2]. One way of explaining this convergence is the expression of TRPV1 ion channels in pruriceptors. Although these channels are important for detecting itch, they are also expressed in nociceptors and are stimulated by capsaicin, the classic pain stimulus found in peppers [4].

The relationship between itch and pain continues in the spinal cord. As mentioned, the spinothalamic tract (STT) is of special interest in understanding the distinction between itch and pain, as both sensations traverse the same pathway. Transecting the anterolateral funiculus, where the tract ascends, has eliminated sensitivity to itch, pain, and temperature, establishing the common usage of the tract by these sensations [6]. Electrophysiological recordings of primate STT neurons given different types of sensory stimuli also revealed that two-thirds of the nociceptors were sensitive to itch stimuli as well as pain, again highlighting the apparent overlap between itch and pain in the spinal cord [9].

The relationship between itch and pain is best understood as antagonistic. Recordings of STT pruriceptive neurons showed that after stimulation by histamine (a pruritic, or itchy, stimulus), neuronal firing decreased when the skin was scratched. However, the same neurons increased firing after scratching in response to capsaicin. Although these neurons were activated by both pain and itch stimuli, the difference in their response to scratching suggests an antagonistic relationship between pruriceptive and nociceptive neurons via inhibitory interneurons [6].

The intersection between pain and itch raises the question of how the brain perceives pain and itch as distinct in the presence of so much overlap. Numerous theories and models attempt to explain the nature of this relationship and are overviewed in the following sections.

 

Classical Models of Itch and Pain Discrimination 

Intensity Model 

Given observations of the overlaps between itch and pain, itch was first theorized to be a subset of pain in the intensity model. This model postulates that polymodal neurons (neurons sensitive to many modalities) differentiate between itch and pain through patterns of firing, or “intensity,” arising from weak or strong stimulation [1, 4, 6, 10]. The model was tested by delivering electrical pulses of varying frequency to the skin. Although the results seemingly disproved the theory, as higher frequencies only increased the intensity of the itch felt rather than transforming it into pain, the theory has not been discounted [6]. Itch stimuli have been shown to trigger lower firing rates than painful stimuli in both peripheral and STT neurons, suggesting that firing rates do play some role in itch perception [6, 9]. Furthermore, both itch and pain stimuli give rise to “bursting” patterns of action potentials, and the interburst interval is shorter in response to capsaicin/pain. This suggests some level of temporal coding, in which information is coded by the timing of action potentials or the intervals between them. This aligns with the intensity model, as a polymodal neuron could code for itch and pain depending on the rate of action potentials or their intervals [9].

The intensity model’s basic principle rests on neurons activated by both pain and itch, and it seemingly aligns with the previously mentioned research identifying pruriceptors as a subset of nociceptors activated by both pain and itch. However, the discovery of itch-specific neurons complicated the model’s validity, lending support to the labeled line model instead.

Labeled Line or Specificity Theory

Labeled line refers to the idea that there exists a specific, separate “line,” or neural pathway, devoted to the sensation of itch, the opposite of the intensity model’s polymodal neurons. As early as the 1800s, researchers discovered that specific spots on the skin are activated by different sensory modalities (coolness, heat, pain, etc.), giving rise to the labeled line theory. Recent electrophysiological studies have supported this view for different sensations by establishing the presence of sensory fibers and spinal relay neurons tuned to only one sensory modality [11].

The labeled line theory’s applicability to itch was supported by the discovery of itch-specific neurons. GRPR+ neurons in the spinal cord were identified that differed from other STT neurons in that they carried purely itch information [3, 12]. When these neurons were ablated with a toxin, itch behavior (scratching) was lost while pain behavior (wiping) was unchanged [12]. This discovery was reinforced by the subsequent identification of MrgprA3+ neurons, which were also itch-specific; their deletion likewise resulted in the loss of itch behavior only [7]. Furthermore, these neurons gave rise to purely itch behavior regardless of the nature of the stimulus, precisely as predicted by the labeled line theory [7]. These discoveries lent significant support to the labeled line theory, yet the existence of itch neurons activated by pain remained a dilemma.

 

Modified Models of Itch and Pain 

The intensity and labeled line theories represent two ends of a spectrum in understanding itch and pain signaling: the first depends on polymodal neurons, and the latter on itch-specific neurons. The discovery of neurons that fall under both categories complicates the validity of each and suggests that an accurate model should include neurons sensitive to both itch and pain while still being capable of differentiating between the two [1].

Spatial Contrast Model

The spatial contrast model expands on the intensity model and does not require itch- and pain-specific neurons. It proposes that itch is felt through “spatial contrast,” when only a small population of nociceptors is activated, whereas pain is felt when a larger population is activated by a stronger stimulus [6, 10]. In one study, a spicule (a small, pointed probe) coated with either histamine (an itch stimulus) or capsaicin produced an itch sensation, yet an injection of capsaicin alone produced pain [13]. Spatial contrast can explain this: the spicule activated a small number of nociceptors, resulting in itch, whereas the more widespread injection stimulated a larger number of nociceptors, resulting in pain.

According to this model, activation of even a small number of non-pruriceptive nociceptors should result in itch, eliminating the need for a labeled line. However, an obstacle remains for this model as well: itch sensation did not decrease relative to pain when the area of exposure to the stimulus increased, although in theory a larger area should activate a greater number of receptors [6, 13].

Selectivity Theory and Population Coding 

The population coding theory, also known as the selectivity theory, modifies the labeled line model, proposing that although there are specific sensory labeled lines, the antagonistic interaction between them shapes the perception of itch. It accounts for the overlap between nociceptors and pruriceptors as well as pain’s inhibition of itch by proposing that pruriceptors are a smaller subset of nociceptors linked to them by inhibitory interneurons [1, 11]. Theoretically, activation of the larger nociceptive population, including pruriceptors, is felt as pain, as the activation of the pain neurons “masks” the sensation of itch. Yet if only the smaller itch-specific subset is activated, this is felt as purely itch, as there is no activation, and consequently no inhibition, from the nociceptive neurons [1, 11, 14].
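Below is a minimal sketch of that masking logic in code. It is a toy illustration, not a biophysical simulation: the firing rates and inhibition weight are invented, and real circuits involve many interneuron types.

```python
def perceived_sensation(pruriceptor_rate: float,
                        other_nociceptor_rate: float,
                        inhibition_weight: float = 1.0) -> str:
    """Toy population-coding rule: broad nociceptive drive masks the itch channel."""
    # Itch output is suppressed in proportion to the broader nociceptive drive,
    # standing in for the inhibitory interneurons linking the two populations.
    itch_signal = max(0.0, pruriceptor_rate - inhibition_weight * other_nociceptor_rate)
    pain_signal = other_nociceptor_rate
    if pain_signal > itch_signal:
        return "pain"
    return "itch" if itch_signal > 0 else "none"

# Only the pruriceptive subset fires (e.g., histamine): perceived as itch.
print(perceived_sensation(pruriceptor_rate=10, other_nociceptor_rate=0))
# The whole nociceptive population fires (e.g., capsaicin): itch is masked, pain wins.
print(perceived_sensation(pruriceptor_rate=10, other_nociceptor_rate=30))
```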

Numerous studies appear to support this hypothesis. In one, the vesicular glutamate transporter VGLUT2 was deleted from DRG nociceptive neurons, impairing their ability to signal. This resulted in spontaneous itch in mice, along with a decrease in pain behavior and, importantly, itch behavior (rather than pain) in response to capsaicin injections [14]. These results were paralleled in another study in which blocking pruriceptors had no effect on pain, yet deleting TRPV1 in a group of nociceptive neurons led to capsaicin being perceived as itch [15]. Together, these studies suggest that groups of nociceptors are involved in inhibiting and masking itch, since impairing them results in itch signaling instead, supporting the population coding theory.

A model of antagonism also requires an inhibitory neuron to link the nociceptors and pruriceptors; Bhlhb5+ interneurons are one such population. When Bhlhb5 was knocked out in mice, itch behavior increased to the point that scratching and licking produced skin lesions [16]. This suggests that these interneurons, and perhaps others, are responsible for inhibiting and regulating itch, further bolstering the population coding model.

Gate Control and Leaky Gate Model

The gate control theory hypothesizes that nociceptive transmission neurons in the spinal cord receive input both from nociceptive primary neurons and from Aβ fibers, primary neurons attuned to non-nociceptive stimuli such as touch. The Aβ fibers in turn inhibit the nociceptive transmission neurons via interneurons, effectively creating a “gate” that can halt the transmission of pain or itch [10]. The previously mentioned Bhlhb5+ interneurons support this gate control model as well [10, 16].

This model was recently refined into the “leaky gate” theory, which builds on the intensity theory and modifies gate control by substituting Grp+ neurons in the role of Aβ fibers. Grp+ spinal cord neurons receive strong input from pain sensory neurons and weak input from itch-specific neurons, coding for itch in an intensity-dependent manner and inhibiting pain. The model differs from gate control in that it lets weak pain signals “leak” through while suppressing strong pain signals to prevent an overwhelming pain sensation: strong painful input drives these interneurons to inhibit pain, whereas the weak activation produced by itch does not [10]. This also explains the common observation that itch is accompanied by a prickly, burning pain; itch input is not strong enough to trigger inhibition of pain, so a weak pain sensation accompanies the itch [10].
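The sketch below reproduces the three behaviors just described with invented weights and thresholds; it illustrates the model's logic and is not the published circuit model.

```python
def leaky_gate_pain_output(pain_afferent: float, itch_afferent: float) -> float:
    """Toy leaky gate: weak pain leaks through, strong pain is capped."""
    # Grp+ interneuron: strong drive from pain afferents, weak drive from itch.
    grp_drive = 1.0 * pain_afferent + 0.3 * itch_afferent
    # Inhibition engages only once the interneuron is strongly driven.
    inhibition = max(0.0, grp_drive - 5.0)
    # Pain transmission neuron: direct drive from both afferents, minus inhibition.
    return max(0.0, pain_afferent + 0.3 * itch_afferent - inhibition)

print(leaky_gate_pain_output(pain_afferent=2, itch_afferent=0))   # 2.0: weak pain leaks through
print(leaky_gate_pain_output(pain_afferent=20, itch_afferent=0))  # 5.0: strong pain is suppressed
print(leaky_gate_pain_output(pain_afferent=0, itch_afferent=8))   # 2.4: faint pain accompanies itch
```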

 

Conclusion

A few major theories of itch perception have been discussed in an attempt to illuminate how itch is attenuated by pain in their inverse relationship. The intensity theory and the labeled line theory are supported by the existence of polymodal neurons and itch-specific neurons, respectively. However, given their opposing views, support for each theory undermines the accuracy of the other; this indicates the need for a model that can reconcile itch specificity with neurons attuned to both itch and pain.

The later models attempt to ease the apparent discord between these two. The spatial contrast model expands on the intensity model while providing a possible mechanism by which pain and itch could be felt from the same population of neurons. The population coding model, on the other hand, expands on the itch-specific neurons of the labeled line while accommodating the inverse relationship between itch and pain. Lastly, the leaky gate model combines aspects of the intensity model with gate control.

These theories attempt to explain itch and pain crosstalk, and the importance of understanding this relationship is evident in the pathophysiology of both acute and chronic itch when crosstalk is dysfunctional. The previously discussed Bhlhb5+ neurons are a prime example of the consequences of impaired itch and pain interaction [2, 16]. Knocking out these interneurons, thereby severing the connection between itch and pain, results in chronic itch-like behavior such as lesions from scratching [16]. This suggests that chronic itch may result from uninhibited, unregulated itch when pain is no longer able to suppress it [2, 16].

This example highlights the practical importance of the interaction between pain and itch. Not only does understanding the intersection between the two sensations provide a better understanding of itch mechanisms, but the intersection itself plays an important role in itch pathophysiology, about which much is still unknown. As new aspects of the itch pathway are discovered, these models will continue to develop.

 

References:

  1. Patel KN, Dong X. 2010. An Itch To Be Scratched. Neuron. 68(3):334-339. doi: 10.1016/j.neuron.2010.10.018.
  2. Liu T, Ji RR. 2013. New insights into the mechanisms of itch: are pain and itch controlled by distinct mechanisms?. Pflugers Arch. 465(12):1671-1685. doi:10.1007/s00424-013-1284-2
  3. Barry DM, Munanairi A, Chen ZF. 2018. Spinal Mechanisms of Itch Transmission. Neurosci Bull. 34(1):156-164. doi:10.1007/s12264-017-0125-2
  4. Dong X, Dong X. 2018. Peripheral and Central Mechanisms of Itch. Neuron 98(3):482-494. doi: 10.1016/j.neuron.2018.03.023
  5. Lay M, Dong X. 2020. Neural Mechanisms of Itch. Annual Review of Neuroscience 43:187-205. doi: 10.1146/annurev-neuro-083019-024537
  6. LaMotte RH, Dong X, Ringkamp M. 2014. Sensory neurons and circuits mediating itch. Nat Rev Neurosci. 15(1):19-31. doi: 10.1038/nrn3641
  7. Han L, Ma C, Liu Q, Weng HJ, Cui Y, Tang Z, Kim Y, Nie H, Qu L, Patel KN, Li Z, McNeil B, He S, Guan Y, Xiao B, LaMotte RH, Dong X. 2013. A subpopulation of nociceptors specifically linked to itch. Nat Neurosci 16:174-182. doi: 10.1038/nn.3289
  8. Carstens E, Iodi MC, Akiyama T, Davoodi A, Nagamine M. 2018. Opposing effects of cervical spinal cold block on spinal itch and pain transmission. Itch 3(3):16. doi: 10.1097/itx.0000000000000016
  9. Davidson S, Zhang X, Khasabov SG, Moser HR, Honda CN, Simone DA, Giesler GJ Jr. 2012. Pruriceptive spinothalamic tract neurons: physiological properties and projection targets in the primate. J Neurophysiol. 108(6):1711-23. doi: 10.1152/jn.00206.2012. 
  10. Sun S, Xu Q, Guo C, Guan Y, Liu Q, Dong X. 2017. Leaky Gate Model: Intensity-Dependent Coding of Pain and Itch in the Spinal Cord. Neuron. 93(4):840-853.e5. doi:10.1016/j.neuron.2017.01.012
  11. Ma Q. 2010. Labeled lines meet and talk: population coding of somatic sensations. J Clin Invest. 120(11):3773-3778. doi:10.1172/JCI43426 
  12. Sun YG, Zhao ZQ, Meng XL, Yin J, Liu XY, Chen ZF. 2009. Cellular basis of itch sensation. Science. 325(5947):1531-1534. doi:10.1126/science.1174868
  13. Sikand P, Shimada SG, Green BG, LaMotte RH. 2009. Similar itch and nociceptive sensations evoked by punctate cutaneous application of capsaicin, histamine and cowhage. Pain 144(1-2):66-75. doi: 10.1016/j.pain.2009.03.001
  14. Liu Y, Abdel Samad O, Zhang L, Duan B, Tong Q, Lopes C, Ji RR, Lowell BB, Ma Q. 2010. VGLUT2-dependent glutamate release from nociceptors is required to sense pain and suppress itch. Neuron 68(3):543-556. doi: 10.1016/j.neuron.2010.09.008
  15. Roberson DP, Gudes S, Sprague JM, Patoski HA, Robson VK, Blasl F, Duan B, Oh SB, Bean BP, Ma Q, Binshtok AM, Woolf CJ. 2013. Activity-dependent silencing reveals functionally distinct itch-generating sensory neurons. Nature Neuroscience 16(7):910-918. doi: 10.1038/nn.3404
  16. Ross SE, Mardinly AR, McCord AE, Zurawski J, Cohen S, Jung C, Hu L, Mok SI, Shah A, Savner EM, Tolias C, Corfas R, Chen S, Inquimbert P, Xu Y, McInnes RR, Rice FL, Corfas G, Ma Q, Woolf CJ, Greenberg ME. 2010. Loss of inhibitory interneurons in the dorsal spinal cord and elevated itch in Bhlhb5 mutant mice. Neuron 65(6):886-898. doi: 10.1016/j.neuron.2010.02.025

Non-Invasive Brain Stimulation Therapies as Therapeutics for Post-Stroke Patients

By Priyanka Basu, Neurobiology, Physiology & Behavior ‘22

Author’s Note: I wrote this review article during my time in UWP 102B this past quarter, though my inspiration for digging deeper into this topic came from personal experience: my uncle recently suffered a stroke and has faced its detrimental effects. I realized I wanted to investigate the possible solutions for him and others, which allowed me to further my knowledge of this field of study. I’d love for readers to understand the complexity and dynamics of non-invasive brain stimulation therapies for post-stroke patients, and their beneficial effects when used in conjunction with other therapies. Though studies are in their preliminary phases and there are quite a few unknowns, it is still important to keep in mind the innumerable therapeutics being created to target patient populations experiencing brain damage; their results are absolutely phenomenal.

 

Abstract

Non-invasive brain stimulation therapies are a prominent biotechnology innovation that has proven effective in treating the aftermath of cerebral damage. These therapies use pulsed magnetic fields or applied currents to induce electric fields in the brain, driving activity through neural circuits. Although several stimulation therapies exist, this review discusses the most widely used: transcranial magnetic stimulation (TMS), repetitive transcranial magnetic stimulation (rTMS), transcranial direct current stimulation (tDCS), and theta-burst stimulation (TBS). Post-stroke patients often experience significant sensorimotor impairments, such as an inability to make arm or hand movements, as well as memory or behavioral deficits. Unlike standard alternative procedures, stimulatory therapies can increase neuronal excitability in ways that improve these impairments. Although stimulation therapies have shown viable outcomes on their own, current research examines how combining them with secondary therapeutics can produce synergistic effects.

 

Introduction

Stimulation therapies were first put to clinical use in 1985 to investigate the workings of the human corticospinal system [1]. The magnetic field produced by stimulation penetrates the scalp and neural tissue, activating neurons in the cortex by inducing an electric field in the brain [1]. Because stimulation triggers depolarizing currents and action potentials in targeted regions, patients with damage to the cerebral cortex regained a degree of normal motor, behavioral, or cognitive function and found considerable relief [1].

In recent years, stroke has become the second leading cause of death worldwide [2]. Neurologically speaking, a stroke interrupts blood flow to regions of the brain, such as the motor cortex, weakening neurological function throughout the body [2]. Stimulatory therapies are used in these cases to activate neurons, jumpstarting their firing capabilities and helping restore the body’s normal functionality [1]. Although reperfusion therapies using thrombolysis can treat ischemic (i.e., blood-deprived) tissue in stroke patients by dissolving deadly clots in blood vessels, these therapeutics are often inaccessible to the general population because of their cost and scarcity [2]. Oftentimes, even standard pharmacological drugs prove ineffective [2]. Through extensive experimentation, scientists have discovered that the brain can reconstruct itself through a process called cortical plasticity, which allows neural connections to be modified back toward their normal firing patterns [3]. Drawing on this innate, adaptive capacity of the brain, researchers developed stimulatory therapies to essentially boost our own neural hardware [2].

By investigating how these therapies and their mechanisms work in conjunction with other therapeutics for post-stroke patients, researchers can build an in-depth understanding of further advantageous treatments.

 

Mechanism of Non-Invasive Neural Stimulation

Most current noninvasive brain stimulation therapies use similar methodologies, inducing magnetic fields or electrical currents across the skull and cerebral cortex to rapidly excite neurons [4]. The most common noninvasive brain stimulation (NIBS) techniques currently in use are transcranial magnetic stimulation (TMS), repetitive transcranial magnetic stimulation (rTMS), transcranial direct current stimulation (tDCS), and theta-burst stimulation (TBS) [4]. Our brains reorganize innately after stroke or cerebral damage through mechanisms of cortical plasticity [3]. Non-invasive stimulation therapies can augment this cortical plasticity, rapidly modulating neural connections through electrical activation to promote efficient neuronal and/or motor recovery after the incident [3]. According to Takeuchi et al., TMS and similar therapies stimulate the cortex through the scalp and skull by positioning a coiled wire over the scalp to generate a local magnetic field [4]. As these magnetic field pulses enter the brain, they establish an electrical current that stimulates cortical neurons and induces neuronal depolarization (i.e., excitation) [4]. rTMS uses the same underlying mechanism as TMS but repeats the magnetic pulses at a greater rate, inducing a higher-frequency current [5]. TBS, in turn, is a modification of rTMS: it operates at a comparable frequency but delivers the stimulation in larger bursts rather than as small, frequent pulses [6].
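
To get an intuitive sense of the physics behind TMS, a back-of-the-envelope estimate from Faraday’s law is shown below. This is an editorial sketch: the peak field, rise time, and loop radius are typical textbook magnitudes for TMS, not parameters from any study cited in this review.

```python
# Order-of-magnitude sketch of the electric field induced by a TMS pulse.
# All parameters are assumed "typical" values, not taken from the studies above.
B_peak = 1.5        # peak magnetic field at the coil, tesla
rise_time = 100e-6  # time for the field to reach its peak, seconds
r = 0.02            # radius of a circular current loop in cortex, meters

dB_dt = B_peak / rise_time  # rate of change of the field, T/s
# For a uniform field through a circular loop, Faraday's law gives
# E * 2*pi*r = pi*r^2 * dB/dt  =>  E = (r / 2) * dB/dt
E = (r / 2) * dB_dt         # induced electric field, V/m

print(f"dB/dt ≈ {dB_dt:.0f} T/s")
print(f"Induced E-field ≈ {E:.0f} V/m")  # ~150 V/m, enough to depolarize cortical neurons
```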

To understand the degree to which noninvasive brain stimulation acts on cortical neural plasticity, it is best to examine its function in the motor cortex, one of the regions most commonly damaged in stroke patients [4]. Neuronally, NIBS can excite the damaged hemisphere, increasing activity in the lesioned-side, or ‘ipsilesional,’ motor cortex [4]. This excitability is highly inducible and is required for proper motor learning and function in normal human behavior [7,8]. In addition, these therapeutics may induce metabolic changes that stimulate our innate neuroplastic network, supporting successful post-stroke motor recovery [4]. Over time, and with continued electrical stimulation therapy, long-term potentiation of our neural circuitry can lead to swift recovery of the affected hemisphere [3]. Through this method of magnetic stimulation of damaged cortical regions, post-stroke patients can recover faster than ever before.

 

Excitability of Motor and Behavioral Neural Networks 

Ultimately, NIBS treatments induce excitability in motor and behavioral neural networks, reactivating affected cerebral regions and increasing neural plasticity within them [5]. In a study led by Delvaux et al., TMS therapies were used to track the reorganization of motor cortical areas in post-stroke patients [9]. The researchers followed a group of 31 patients who had experienced an ischemic stroke in the middle cerebral artery leading to severe hand palsy [9]. Patients were clinically assessed with the Medical Research Council scale, the National Institutes of Health stroke scale, and the Barthel Index at set time points after stroke [9]. When damaged regions were measured by motor-evoked potential (MEP) amplitudes, the affected areas initially produced statistically smaller responses than unaffected areas, indicating reduced motor activation resulting from the damaged regions of the brain [9]. After the affected regions were treated with focal transcranial magnetic stimulation (fTMS), a specific type of TMS therapy, the stimulation induced excitability in both affected and unaffected motor regions, owing to the inducible nature of connected regions in the brain [9]. This study evaluated a TMS technique, involving MEP amplitude measurements and motor maps of the first dorsal interosseous (FDI) hand muscle, that distinguishes it from most other stimulatory approaches, including rTMS and tDCS, and helps physiologically characterize the impact of neurological damage in the brain. Although the study had a relatively small sample size, this can be considered sufficient given the demands of the experimental design and the scarcity of eligible participants. The participants, ranging between 45 and 80 years old, were screened for underlying neurological disorders to reduce confounding factors. By testing participants with standardized scales and MEP measurements, the study remained well controlled and supported its conclusions despite the small sample.

A similar study, conducted by Boggio et al., further investigated the effects of NIBS on motor and behavioral neural networks by applying current stimulation of opposite polarities (anodal (+) and cathodal (−)) to stroke patients and measuring the resulting improvements. The investigation examined a specific brain stimulation therapy, tDCS, for its excitatory effects and potential benefits in post-stroke patients [7]. Investigators tested motor performance and improvement in stroke patients using two experiments [7]. Experiment 1 was conducted over four weekly sessions using sham (placebo), anodal (excitability-increasing), and cathodal (excitability-decreasing) transcranial direct current stimulation (tDCS) [7]. In Experiment 2, five daily sessions of cathodal tDCS alone were applied to affected brain regions [7]. The effects were reported following the procedure and blindly evaluated using the Jebsen-Taylor Hand Function Test, a standardized measure of gross motor hand function [7]. Between the two experiments, the most significant motor and behavioral improvements were found using the three stimulation conditions of Experiment 1 [7]. When stimulations were compared individually, however, viable motor improvement was still evident with either cathodal tDCS of the unaffected hemisphere or anodal tDCS of the affected hemisphere, relative to sham tDCS [7]. Daily sessions proved more beneficial than weekly ones in terms of lasting treatment results [7]. The investigators concluded that their findings strongly support other tDCS research on motor function improvement in stroke patients [7]. tDCS is considered safe, reproducible, and inexpensive, opening the possibility of further research on the technique with a wider range of patients. The study could have evaluated additional motor capabilities, rather than focusing on the hand alone, to add variation, additional variables, and detail beyond simply validating the technique. Both experiments produced statistically significant results and demonstrate the excitatory capabilities of stimulatory therapies currently used for post-stroke patients.
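
For readers unfamiliar with how a tDCS “dose” is described, the sketch below computes the electrode current density for a typical montage. The current and electrode area are common values from the broader tDCS literature, not the settings reported by Boggio et al.

```python
# Minimal sketch: current density for a typical tDCS montage.
# Parameter values are common ranges from the tDCS literature,
# not the settings used in the study discussed above.
current_mA = 2.0      # stimulation current in milliamps (typical: 1-2 mA)
electrode_cm2 = 35.0  # sponge electrode area in cm^2 (typical: 25-35 cm^2)

density = current_mA / electrode_cm2  # mA per cm^2
print(f"Current density ≈ {density:.3f} mA/cm^2")  # ≈ 0.057 mA/cm^2

# Anodal stimulation (current entering the head at the electrode) tends to
# raise cortical excitability; cathodal tends to lower it. The density is
# the same; only the polarity at the target site differs.
```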

 

Effectivity of Alternative Neural Therapeutics in Conjunction with NIBS Therapies 

Although standard NIBS therapies have been shown to provide impressive results for post-stroke patients, few studies have explored the prospects of using NIBS in conjunction with other therapies for these patients. Aphasia, an impairment of the ability to comprehend or express language, is a common neurological disorder in post-stroke patients, resulting from damage to the speech and language centers of the brain [10]. A number of therapeutics target not only post-stroke motor dysfunction but also behavioral dysfunctions of stroke, including the aphasias. For several years, studies have investigated intonation-based intervention (melodic intonation therapy, or MIT) in patients with severe non-fluent aphasia, showing substantial benefits [10]. A study conducted by Vines et al. (2011) expanded on these findings by pairing MIT with a brain stimulation therapy, transcranial direct current stimulation (tDCS), to determine whether the combination augments the benefits of MIT in patients with non-fluent aphasia [10]. Six patients with moderate to severe non-fluent aphasia underwent three days of anodal tDCS with MIT and another three days of sham tDCS with MIT [10]. The two treatment blocks were separated by one week and assigned in random order [10]. Compared with sham tDCS plus MIT, anodal tDCS plus MIT led to statistically significant improvements in the patients’ fluency of speech [10]. The study supported the idea that, through combined anodal tDCS and MIT, the brain can reorganize and heal damage to its language centers, restoring neurological activity in non-fluent aphasia patients [10]. One important limitation, however, was the small number of subjects: with only six patients, a larger sample would be needed for reliable, generalizable results. Although the study was small, it did include a range of participant ages, mitigating confounding effects of age-related neurological differences.

An additional study important to understanding the prospects of conjunctive stimulation therapy was conducted in 2012 by Avenanti et al. The study sought to assess the possible benefits of combining non-invasive brain stimulation (rTMS) with physical therapy. Many studies have investigated the effects of TMS alone on chronic stroke patients, but few have examined TMS combined with physical therapy. In a double-blind, randomized experiment, Avenanti et al. (2012) studied a group of 30 patients who were given either real or sham repetitive transcranial magnetic stimulation (rTMS) either before or after physical therapy (PT) [5]. Outcomes were evaluated based on dexterity and force manipulations of motor control [5]. Overall, patients given real rTMS in conjunction with PT developed statistically better behavioral and neurophysiological outcomes, and the effects were greatest when stimulation was delivered before physical therapy in a sequential manner [5]. Improvements were detected in all combined groups (real or sham, before or after PT), and even with PT alone in certain experimental groups [5]. The researchers concluded that treating chronic stroke patients who have motor disabilities with rTMS before PT provided optimal gains in motor excitability, although the combination was effective in either order [5]. With statistically significant results, the study indicates valid conjunctive benefits of PT and rTMS therapy for the patients evaluated [5]. Regarding reliability, each method was properly implemented, with sham trials providing appropriate controls [5].

Conjunctive therapies offer new, potentially more advantageous avenues of treatment for post-stroke patients than either therapy used alone. As new investigations unfold in this field, previously unknown possibilities are being uncovered, enabling better solutions to cerebral and ischemic damage.

 

Conclusion

Non-invasive brain stimulation (NIBS) therapies are a well-refined and successful therapeutic for post-stroke patients. Although NIBS therapies are already among the mainstream solutions for damaged cerebral regions, current research seeks to identify conjunctive therapies that can further improve treatment. Standard stimulation procedures deliver measured magnetic or electric currents to depolarize, or excite, regions of the brain, stimulating neurons toward proper activity. In doing so, our innate capacity for neural plasticity works with the stimulation to enhance the recovery of damaged cerebral regions. In recent years, scientists have gone a step further, combining stimulation therapies with additional stroke therapies to enhance results. This research is still in its early stages, and more studies and trials are needed to provide sufficient data to confirm the efficacy of these combinations, promising as early results have been. Several studies lack the participants, data, and resources needed to validate these conjunctive therapies conclusively. Further study, through repeated trials, larger sample sizes, and statistically significant results, may ultimately identify effective conjunctive treatments for post-stroke patients.

 

References:

  1. Santos MD dos, Cavenaghi VB, Mac-Kay APMG, Serafim V, Venturi A, Truong DQ, Huang Y, Boggio PS, Fregni F, Simis M. 2017. Non-invasive brain stimulation and computational models in post-stroke aphasic patients: single session of transcranial magnetic stimulation and transcranial direct current stimulation. A randomized clinical trial. Sao Paulo Med J. 135(5): 475–480.
  2. Kubis N. 2016. Non-invasive brain stimulation to enhance post-stroke recovery. Front Neural Circuits. 10: 56.
  3. Chen R, Cohen LG, Hallett M. 2002. Nervous system reorganization following injury. Neuroscience. 111: 761–773.
  4. Takeuchi N, Izumi S. 2012. Noninvasive brain stimulation for motor recovery after stroke: mechanisms and future views. Stroke Res Treat. 584727.
  5. Avenanti A, Coccia M, Ladavas E, Provinciali L, Ceravolo MG. 2012. Low-frequency rTMS promotes use-dependent motor plasticity in chronic stroke: a randomized trial. Neurology. 78: 256–264.
  6. van Lieshout ECC, Visser-Meily JMA, Neggers SFW, van der Worp HB, Dijkhuizen RM. 2017. Brain stimulation for arm recovery after stroke (B-STARS): protocol for a randomised controlled trial in subacute stroke patients. BMJ Open. 7(8): e016566.
  7. Boggio PS, Nunes A, Rigonatti SP, Nitsche MA, Pascual-Leone A, Fregni F. 2007. Repeated sessions of noninvasive brain DC stimulation is associated with motor function improvement in stroke patients. Restor Neurol Neurosci. 25: 123–129.
  8. Bucur M, Papagno C. 2018. A systematic review of noninvasive brain stimulation for post-stroke depression. J Affect Disord. 238: 69–78.
  9. Delvaux V, Alagona G, Gérard P, De Pasqua V, Pennisi G, Maertens de Noordhout A. 2003. Post-stroke reorganization of hand motor area: a 1-year prospective follow-up with focal transcranial magnetic stimulation. Clin Neurophysiol. 114: 1217–1225.
  10. Vines BW, Norton AC, Schlaug G. 2011. Non-invasive brain stimulation enhances the effects of melodic intonation therapy. Front Psychol. 2: 230.

A Neuroimmunological Approach to Understanding SARS-CoV-2

By Parmida Pajouhesh, Neurobiology, Physiology & Behavior ‘23

Author’s Note: The Coronavirus Disease has undoubtedly affected us in many sectors of our lives. There has been a lot of discussion surrounding the respiratory symptoms induced by the disease but less focus on how contracting the disease can result in long-term suffering. As someone who is fascinated by the brain, I wanted to investigate how COVID-19 survivors have been neurologically impacted post-recovery and what insight it can provide on more severe neurological disorders. 

 

The Coronavirus Disease (COVID-19) has drastically changed our lives over the past fifteen months. The viral disease produces mild to severe symptoms, including fever, chills, and nausea. The length of recovery varies between individuals, typically ranging from one to two weeks after contraction [1]. Once recovered, those infected are assumed to be healthy and “back to normal,” but data show that this is not the case for some COVID-19 survivors. For some patients, COVID-19 has produced severe long-term effects that greatly impair their ability to perform daily tasks. Taking a deeper look into the neuroimmunological side effects of COVID-19 can help explain the long-term symptoms experienced by survivors.

Developing our knowledge of long-term neurological effects on COVID-19 survivors is crucial in understanding the risk of cognitive impairments, including dementia and Alzheimer’s disease [2].

A team led by Dr. Alessandro Padovani at the University of Brescia recruited COVID-19 survivors with no previous neurological disease or cognitive impairment for check-ins six months after infection [3]. The exam assessed motor and sensory cranial nerves and global cognitive function. The results showed that the most prominent symptoms were fatigue, memory complaints, and sleep disorders. Notably, these symptoms were reported much more frequently in patients who were older and had been hospitalized for a longer period of time [3].

Other symptoms reported include “brain fog,” a loss of taste or smell, and brain inflammation [2]. Researchers hypothesize that the virus does not necessarily need to make its way inside neurons to produce “brain fog”; instead, they propose that it attacks sensory neurons, the nerves that extend from the spinal cord throughout the body to gather information from the external environment. When the virus hijacks nociceptors, the neurons specifically responsible for sensing pain, symptoms like brain fog can follow [4].

Theodore Price, a neuroscientist at the University of Texas at Dallas, investigated the relationship between nociceptors and angiotensin-converting enzyme 2 (ACE2), a protein embedded in cell membranes that allows viral entry when the spike protein of SARS-CoV-2 binds to it [4, 5]. Nociceptors live in clusters alongside the spinal cord called dorsal root ganglia (DRG). Price determined that a set of DRG neurons did contain ACE2, enabling the virus to enter those cells. The ACE2-containing DRG neurons also carried messenger RNA for the sensory protein MRGPRD, which marks neurons whose axons are concentrated at the skin, inner organs, and lungs. If sensory neurons are infected with the virus, the infection could have long-term consequences. Alternatively, the virus may not need to enter the brain or infect sensory neurons directly at all; the immune response it triggers may instead act on the brain, leading to the breakdown of the blood-brain barrier that surrounds it [6]. While this area of research is still under investigation, studies have shown that blood-brain barrier breakdown and lack of oxygen to the brain are hallmarks of Alzheimer’s disease and dementia. Scientists are tracking global cognitive function to further understand the impact of COVID-19 treatments and vaccines on these neurological disorders.

Understanding whether the cause of neurological symptoms is viral brain infection or immune activity is important to clinicians who provide intensive care and prescribe treatments [2, 6]. With future studies, researchers plan to further examine the causes of these symptoms. This knowledge will hopefully provide COVID-19 survivors with adequate support to combat these difficulties and reduce their risk of developing a more severe neurological disorder in the future.

 

References:

  1. Sissons, Beth. 2021. “What to Know about Long COVID.” Medical News Today. www.medicalnewstoday.com/articles/long-covid#diagnosis
  2. Rocheleau, Jackie. 2021. “Researchers Are Tracking Covid-19’s Long-Term Effects On Brain Health.” Forbes. www.forbes.com/sites/jackierocheleau/2021/01/29/researchers-are-tracking-covid-19s-long-term-effects-on-brain-health/?sh=59a0bb284303
  3. George, Judy. 2021. “Long-Term Neurologic Symptoms Emerge in COVID-19.” MedPage Today. www.medpagetoday.com/infectiousdisease/covid19/90587
  4. Sutherland, Stephani. 2020. “What We Know So Far about How COVID Affects the Nervous System.” Scientific American. www.scientificamerican.com/article/what-we-know-so-far-about-how-covid-affects-the-nervous-system
  5. Erausquin, Gabriel A et al. 2021. “The Chronic Neuropsychiatric Sequelae of COVID‐19: The Need for a Prospective Study of Viral Impact on Brain Functioning.” Alzheimer’s & Dementia. doi:10.1002/alz.12255
  6. Marshall, Michael. 2020. “How COVID-19 Can Damage the Brain.” Nature. www.nature.com/articles/d41586-020-02599-5

Psychedelics Herald New Era of Mental Health

By Macarena Cortina, Psychology ‘21 

Author’s Note: As a psychology major who used to be a plant biology major, I’m very interested in the arenas where these two fields interact. Such is the case with psychoactive plants and fungi that produce significant alterations in brain chemistry and other aspects of the human psyche. That is why I chose to write about psychedelics and their rebirth in both research and culture. In the past few months, I have seen increasing media coverage of new scientific findings about these substances, as well as legal advancements in their decriminalization, making this a relevant topic in the worlds of psychology and ethnobotany. The history of psychedelics is a long and complicated one, but here I attempt to cover the basics in hopes of demystifying these new powerful therapeutic treatments and informing readers about the latest horizon in mental health. 

 

After decades in the dark, psychedelic drugs are finally resurfacing in the world of science and medicine as potential new tools for mental health treatment. Psychedelics, otherwise known as hallucinogens, are a class of psychoactive substances that have the power to alter mood, perception, and cognitive functions in the human brain. They include drugs such as LSD, magic mushrooms, ayahuasca, MDMA, and peyote [1]. The US has a long and complex history with these drugs, and the resulting criminalization and stigma associated with them have kept psychedelics in the shadows for many years. However, a major shift in society’s opinions of psychedelics is taking place, and a reawakening is happening in the scientific community. Researchers from various disciplines are becoming increasingly interested in unlocking the therapeutic powers of these compounds, especially for those who are diagnosed with mental disorders and are resistant to the treatments that are currently available for them. Whether or not the world is ready for it, the psychedelic renaissance has begun.

Psychedelics have been used by Indigenous communities around the world as part of their cultural, spiritual, and healing traditions for thousands of years. In the Western world, psychedelics were rediscovered in the 1940s by Swiss chemist Albert Hofmann, who accidentally absorbed LSD through his skin while conducting tests for a potential medicine [2]. What followed was an “uninterrupted stream of fantastic pictures, extraordinary shapes, with intense, kaleidoscopic play of colors” [7]. Once LSD was disseminated throughout the world, psychologists began to experiment with it as a psychotomimetic, or a drug that mimics psychosis, in hopes of gaining a better understanding of schizophrenia and similar mental disorders [2, 3]. In the 1950s, fearing that communist nations were using mind control to brainwash US prisoners of war, the CIA carried out the top-secret project MK-Ultra, drugging even unwitting subjects with psychedelics in an attempt to learn about potential mind-control techniques [4]. Recreational use of psychoactive substances proliferated in the counterculture movement of the 1960s, eventually leading to their criminalization and status as Schedule I drugs [5]. This classified them as substances with no medical value and a high potential for abuse, two descriptors we know are not factual [6].

Now, people seem to be reevaluating their outlook on these formerly demonized drugs and are instead looking for ways to harness psychedelics’ medicinal properties for mental and physical improvement. Momentum is building quickly. Clinical trials are beginning to show real potential in the use of psychedelics for the treatment of depression, anxiety, post-traumatic stress disorder (PTSD), addiction, eating disorders, and emotional suffering caused by the diagnosis of a terminal illness. The US Food and Drug Administration (FDA) has already approved the use of ketamine for therapeutic purposes, with MDMA and psilocybin set to follow [7]. Psilocybin has also been decriminalized in cities across the US and was legalized for medical use in the entire state of Oregon in November 2020. Entrepreneurs and investors are flocking to startups such as MAPS Public Benefit Corporation and Compass Pathways, which are currently developing psychedelic drugs for therapeutic application. Research centers have been cropping up across the country as well, even at prestigious institutions like Johns Hopkins School of Medicine and Massachusetts General Hospital.

So how do psychedelics work? In truth, scientists still don’t know exactly what happens to neural circuitry under the influence of these mind-altering drugs. While more research is required to fully understand how psychedelics affect the brain, there are some findings that help clarify this mystery. For example, the major group of psychedelics—called the “classic psychedelics”—closely resembles the neurotransmitter serotonin in terms of molecular structure [8]. This group includes psilocin, one of the important components of magic mushrooms; 5-MeO-DMT, which is present in a variety of plant species and at least one toad species; and LSD, also known as acid [8]. What they all have in common is a tryptamine structure, characterized by the presence of one six-atom ring linked to a five-atom ring [8]. This similarity lends itself to a strong affinity between these psychedelics and serotonin receptors in the cerebral cortex, particularly the receptor 5-HT2A [8]. The implication of this is that psychedelics can have a significant and widespread influence on brain chemistry, given that serotonin is one of the main neurotransmitters in the brain and plays a major role in mood regulation [9].
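
To see why a tight structural fit to serotonin matters, consider the standard one-site receptor occupancy equation. The sketch below is illustrative only: the dissociation constant (Kd) is a made-up placeholder, not a measured affinity of any psychedelic for 5-HT2A.

```python
# Hypothetical one-site binding sketch: fraction of 5-HT2A receptors occupied
# as a function of ligand concentration. Kd here is a placeholder, not a
# measured value for any particular drug.
def occupancy(ligand_nM: float, kd_nM: float) -> float:
    """Langmuir (single-site) occupancy: [L] / ([L] + Kd)."""
    return ligand_nM / (ligand_nM + kd_nM)

kd = 10.0  # placeholder dissociation constant, nM
for conc in (1, 10, 100):
    print(f"[L] = {conc:>3} nM -> occupancy = {occupancy(conc, kd):.0%}")
# A lower Kd (tighter binding, as expected for ligands that closely mimic
# serotonin) shifts this curve left: meaningful receptor occupancy at much
# lower concentrations.
```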

What follows is a poorly understood cascade of effects that causes disorganized activity across the brain [10]. At the same time, it seems that the brain’s default-mode network gets inhibited. British researcher Robin Carhart-Harris recently discovered this by dosing study participants with either psilocybin or LSD and examining their neural activity with the help of fMRI (functional magnetic resonance imaging). Rather than seeing what most people expected—an excitation of brain networks—Dr. Carhart-Harris found a decrease of neuronal firing in the brain, specifically in the default-mode network. According to Michael Pollan, author of the best-selling book on psychedelics How to Change Your Mind, this network is a “tightly linked set of structures connecting the prefrontal cortex to the posterior cingulate cortex to deeper, older centers of emotion and memory.” Its function appears to involve self-reflection, theory of mind, autobiographical memory, and other components that aid us in creating our identity. In other words, the ego—the conscious sense of self and thus the source of any self-destructive thoughts that may arise—seems to be localized in the default-mode network. This network is at the top of the hierarchy of brain function, meaning it regulates all other mental activity [10].
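
As a rough illustration of how studies quantify coupling within a network like the default-mode network, the sketch below computes functional connectivity (pairwise correlations between regional time courses) on synthetic data. It shows the general method only; it is not the analysis pipeline of the Carhart-Harris study, and the region names are placeholders.

```python
import numpy as np

# Toy functional-connectivity sketch on synthetic "fMRI" time series.
# Real default-mode-network analyses involve far more (preprocessing,
# region definition, statistics); this shows only the core correlation step.
rng = np.random.default_rng(0)
n_timepoints = 200

shared = rng.normal(size=n_timepoints)                 # common network fluctuation
pfc = shared + 0.5 * rng.normal(size=n_timepoints)     # "prefrontal" ROI
pcc = shared + 0.5 * rng.normal(size=n_timepoints)     # "posterior cingulate" ROI
ctrl = rng.normal(size=n_timepoints)                   # unrelated control ROI

signals = np.vstack([pfc, pcc, ctrl])
conn = np.corrcoef(signals)  # pairwise Pearson correlations = connectivity
print(np.round(conn, 2))
# Strong pfc-pcc coupling, weak coupling with the control region. A drop in
# within-network correlation is one signature of a "quieted" network.
```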

Therefore, when psychedelics enter the system and quiet the default-mode network, suddenly new and different neural pathways are free to connect, leading to a temporary rewiring of the brain [10]. In many cases, this disruption of normal brain functioning has reportedly resulted in mystical, spiritual, and highly meaningful experiences. Psychedelics facilitate neuroplasticity, thereby helping people break negative thinking patterns and showing them—even temporarily—that it’s possible to feel another way or view something from a different (and more positive) perspective. 

This kind of experience can be immensely helpful to someone who is struggling with a mental health disorder and needs a brain reset. While other techniques, such as meditation and general mindfulness, can help cultivate a similar feeling, they require much more time and effort, something that is not always feasible—and never easy—for those who are severely struggling with their mental health [10]. Psychedelics can help jump-start the process of healing, and their effects can be made even more powerful and long-lasting when coupled with psychotherapy [11]. Talking with a psychiatrist or psychologist after the drug treatment can help integrate and solidify a client’s newly acquired thinking patterns [11]. 

In a study published in The New England Journal of Medicine in April 2021, researchers found that psilocybin works at least as well as the leading antidepressant escitalopram [12]. In this double-blind, randomized, controlled trial, fifty-nine participants with moderate-to-severe depression took either psilocybin or escitalopram, along with a placebo pill in both cases. After six weeks, participants in both groups exhibited lower scores on the 16-item Quick Inventory of Depressive Symptomatology–Self-Report (QIDS-SR-16), indicating an improvement in their condition. The difference in scores between the two groups was not statistically significant, meaning that a longer study with a larger sample size is still required to show whether there is an advantage to treating depression with psilocybin over conventional drugs [12]. However, one notable difference was that psilocybin seems to take effect faster than escitalopram [13]. As an SSRI (selective serotonin reuptake inhibitor), escitalopram takes a couple of months to work, which is unhelpful for those with severe depression. Psilocybin, then, may provide more immediate relief to people battling depression [13].
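
To make “not statistically significant” concrete, here is a minimal sketch of the kind of two-group comparison such a trial involves. Every number below is invented for illustration; the actual QIDS-SR-16 results are reported in the trial itself [12].

```python
from scipy import stats

# Hedged sketch of a two-group comparison on depression-score changes.
# ALL numbers below are invented for illustration; see the trial [12]
# for the actual QIDS-SR-16 results.
mean_psil, sd_psil, n_psil = -8.0, 6.0, 30    # hypothetical score change
mean_escit, sd_escit, n_escit = -6.0, 6.0, 29

t, p = stats.ttest_ind_from_stats(mean_psil, sd_psil, n_psil,
                                  mean_escit, sd_escit, n_escit)
print(f"t = {t:.2f}, p = {p:.2f}")
# With ~30 per group and this much spread, a 2-point difference yields
# p > 0.05: the data are compatible with no difference, which is why a
# larger trial is needed to detect (or rule out) a real advantage.
```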

In June 2020, a team of researchers at Johns Hopkins published a meta-analysis of nine clinical trials concerning psychedelic-assisted therapy for mental health conditions such as PTSD, end-of-life distress, depression, and social anxiety in adults with autism [14]. These were all the “randomized, placebo-controlled trials on psychedelic-assisted therapy published [in English] after 1993.” The psychedelics in question included LSD, psilocybin, ayahuasca, and MDMA. Following their statistical meta-analysis of these trials, the authors found that the “overall between-group effect size at the primary endpoint for psychedelic-assisted therapy compared to placebo was very large (Hedges g = 1.21). This effect size reflects an 80% probability that a randomly selected patient undergoing psychedelic-assisted therapy will have a better outcome than a randomly selected patient receiving a placebo” [14].
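
The paper’s jump from Hedges g = 1.21 to an “80% probability” is the standard common-language effect size conversion: assuming normally distributed outcomes in both groups, P(treated > placebo) = Φ(g/√2). The quick check below reproduces the figure under that assumption.

```python
from math import sqrt
from scipy.stats import norm

# Converting a standardized effect size into the "probability of superiority"
# (common-language effect size), assuming normal outcomes in both groups:
# P(treated > placebo) = Phi(g / sqrt(2)).
g = 1.21
prob_superiority = norm.cdf(g / sqrt(2))
print(f"g = {g} -> P(superiority) = {prob_superiority:.1%}")  # ≈ 80.4%
```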

There were only minimal adverse effects reported from this kind of therapy and no documentation of serious adverse effects [14]. When compared to effect sizes of pharmacological agents and psychotherapy interventions, the effects of psychedelic-assisted therapy were larger, especially considering the fact that participants received the psychedelic substance one to three times prior to the primary endpoint, as opposed to daily or close-to-daily interventions with psychotherapy or conventional medications. Overall, results suggest that psychedelic-assisted therapy is effective—with minimal adverse effects—and presents a “promising new direction in mental health treatment” [14].

At UC Davis, researchers in the Olson Lab recently engineered a drug modeled after the psychedelic ibogaine [15]. This variant, called tabernanthalog (TBG), was designed to induce the therapeutic effects of ibogaine without the toxicity and risk of cardiac arrhythmias that make consuming ibogaine unsafe. TBG is a non-hallucinogenic, water-soluble compound that can be produced in a single step. In an experiment performed with rodents, “tabernanthalog was found to promote structural neural plasticity, reduce alcohol- and heroin-seeking behavior, and produce antidepressant-like effects.” These effects may prove long lasting, given that TBG appears able to modify the neural circuitry related to addiction, potentially making it a better alternative to existing anti-addiction medications. And since the brain circuits involved in addiction overlap with those of conditions like depression, anxiety, and post-traumatic stress disorder, TBG could help treat various mental health issues [15].

As the psychedelic industry begins to emerge, members of the psychedelic community are voicing their concerns about the risks that come with rapid commercialization [7]. Biotech companies, researchers, and therapists should be careful about marketing psychedelics as a casual, quick fix to people’s problems. Psychedelics can occasion intense and profound experiences and should be consumed with the right mindset, setting, and guidance. There are still many unknowns about psychedelic use, especially its long-term effects. Not all individuals should try treatment with psychedelics, especially those with a personal or family history of psychosis. It will also be important to move forward in a way that is respectful to Indigenous traditions and accessible to all people—particularly people of color—without letting profit become the main priority. Some advocates worry that commercialization and adoption into a pharmaceutical model might strip psychedelics of their most powerful transformational benefits and that they will wind up being used merely for symptom resolution [7]. As long as psychedelics’ reintroduction to mainstream medicine is handled mindfully, the world may soon have a new avenue for effective mental health therapy that honors its Indigenous heritage and is accessible to all. 

 

References:

  1. Alcohol & Drug Foundation. Psychedelics. October 7, 2020. Available from https://adf.org.au/drug-facts/psychedelics/
  2. Williams L. 1999. Human Psychedelic Research: A Historical And Sociological Analysis. Cambridge University: Multidisciplinary Association for Psychedelic Studies. 
  3. Sessa B. 2006. From Sacred Plants to Psychotherapy: The History and Re-Emergence of Psychedelics in Medicine. Royal College of Psychiatrists.
  4. History. MK-Ultra. June 16, 2017. Available from https://www.history.com/topics/us-government/history-of-mk-ultra
  5. Beres D. Psychedelic Spotlight. Why Are Psychedelics Illegal? October 13, 2020. Available from https://psychedelicspotlight.com/why-are-psychedelics-illegal/
  6. United States Drug Enforcement Administration. Drug Scheduling. Available from https://www.dea.gov/drug-information/drug-scheduling.
  7. Gregoire C. NEO.LIFE. Inside the Movement to Decolonize Psychedelic Pharma. October 29, 2020. Available from https://neo.life/2020/10/inside-the-movement-to-decolonize-psychedelic-pharma/
  8. Pollan M. How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence. New York: Penguin Press; 2018.
  9. Bancos I. Hormone Health Network. What is Serotonin? December 2018. Available from https://www.hormone.org/your-health-and-hormones/glands-and-hormones-a-to-z/hormones/serotonin
  10. Pollan M, Harris S, Silva J, Goertzel B. December 11, 2020. Psychedelics: The scientific renaissance of mind-altering drugs. YouTube: Big Think. 1 online video: 20 min, sound, color. 
  11. Singer M. 2021. Trip Adviser. Vogue. March issue: 198-199, 222-224.
  12. Carhart-Harris R, Giribaldi B, Watts R, Baker-Jones M, Murphy-Beiner A, Murphy R, Martell J, Blemings A, Erritzoe D, Nutt DJ. 2021. Trial of Psilocybin versus Escitalopram for Depression. N Engl J Med [Internet]. 384:1402-1411. doi: 10.1056/NEJMoa2032994.
  13. Lee YJ. Business Insider Australia. A landmark study shows the main compound in magic mushrooms could rival a leading depression drug. April 14, 2021. Available from https://www.businessinsider.com.au/psilocybin-magic-mushroom-for-depression-takeaways-from-icl-report-nejm-2021-4
  14. Luoma JB, Chwyl C, Bathje GJ, Davis AK, Lancelotta R. 2020. A Meta-Analysis of Placebo-Controlled Trials of Psychedelic-Assisted Therapy. Journal of Psychoactive Drugs [Internet]. 52(4):289-299. doi: 10.1080/02791072.2020.1769878.
  15. Cameron LP, Tombari RJ, Olson DE, et al. 2020. A non-hallucinogenic psychedelic analogue with therapeutic potential. Nature [Internet]. 589:474–479. https://doi.org/10.1038/s41586-020-3008-z.

Potential Therapeutic Effects of sEH Inhibition in Neurological Disorders

By Nathifa Nasim, Neurobiology, Physiology & Behavior ‘22 

Author’s note: I was recently introduced to this topic and to the potential of sEH inhibition in the context of Alzheimer’s while at Dr. Lee-Way Jin’s lab in the MIND Institute. Further research outside the lab led me to realize the broader implications of sEH inhibition across numerous neurological disorders through its role in inflammatory pathways. This paper aims to illustrate the therapeutic potential of sEH inhibition in neurological disorders through the mediation of inflammation.

 

Introduction

Neuroinflammation, which occurs when the brain’s immune system is activated, is a feature of nearly all neurological disorders. Microglia, the brain’s resident immune cells, release inflammatory mediators typically associated with neuroprotection and neurogenesis. However, persistent inflammation can have harmful effects and is associated with neurological disease. Treatments that target inflammatory processes are therefore of interest to researchers, as they have the potential to alleviate numerous disorders. The soluble epoxide hydrolase (sEH) enzyme, which modulates various physiological processes through the regulation of inflammatory pathways, is one such possible therapeutic target. Research on the effects of sEH inhibition provides an avenue for treating neurological disorders as varied as brain injury, depression, and Alzheimer’s.

sEH and the Role of EETs

The sEH enzyme is important primarily for its ability to metabolize fatty acids in inflammatory pathways. It is found across mammalian tissues and is highly expressed in organs such as the brain and liver. As a member of the epoxide hydrolase family of enzymes, sEH has a C-terminus responsible for the enzyme’s characteristic epoxide hydrolase activity, converting epoxides to diols by adding a water molecule. The N-terminus, on the other hand, has phosphatase activity [2]. Reflecting the different activities of the two domains, the N-terminal phosphatase domain has been linked to cholesterol metabolism, while the C-terminal domain has been associated with inflammatory and cardiovascular effects.

The epoxide hydrolase activity of sEH acts on epoxyeicosatrienoic acids (EETs), epoxy fatty acids that serve as lipid messengers in the body. As lipid messengers, EETs have numerous roles, including acting as anti-inflammatory signals and promoting vasodilation. An increase in EET levels provides anti-inflammatory, antihypertensive, neuroprotective, and cardioprotective effects [2]. These protective effects may be especially pronounced in the brain. For example, inhibition of the cytochrome P450 enzyme, which produces EETs, reduces cerebral blood flow by 30 percent, indicating a critical need for EETs in brain circulation [4].

Inhibition of sEH, a negative regulator of EETs, amplifies these protective effects. The enzyme lowers EET activity by converting EETs to dihydroxyeicosatrienoic acids (DHETs), which have weaker anti-inflammatory effects and can even be pro-inflammatory. Numerous studies have knocked out or selectively inhibited sEH, which stabilizes or increases EET activity because EETs are no longer converted into DHETs. It is through this metabolism of EETs that sEH is linked to numerous diseases and physiological effects, as EETs are critical in controlling inflammation. Research on the possible therapeutic effects of sEH inhibition therefore focuses primarily on raising fatty acid/EET levels by inhibiting the enzyme.
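
A toy model makes it clear why blocking a degrading enzyme raises messenger levels. The sketch below treats EETs as produced at a constant rate and cleared by sEH with Michaelis-Menten kinetics; every rate constant is an arbitrary placeholder chosen only to show the qualitative effect of inhibition, not a measured value.

```python
# Toy steady-state sketch: EETs synthesized at a constant rate and degraded
# by sEH with Michaelis-Menten kinetics. All parameters are arbitrary
# placeholders chosen for illustration, not measured values.
production = 1.0   # EET synthesis rate (arbitrary units per second)
Km = 5.0           # Michaelis constant of sEH for EET (arbitrary units)
Vmax_base = 3.0    # uninhibited maximal sEH degradation rate

def steady_state_eet(vmax):
    """Solve production = vmax * EET / (Km + EET) for EET (needs vmax > production)."""
    return production * Km / (vmax - production)

for inhibited_fraction in (0.0, 0.4, 0.6):
    vmax = Vmax_base * (1.0 - inhibited_fraction)  # inhibitor lowers effective Vmax
    print(f"{inhibited_fraction:.0%} sEH inhibition -> "
          f"steady-state EET = {steady_state_eet(vmax):.1f} a.u.")
# Output: 2.5, 6.2, 25.0 a.u. -- partially blocking the hydrolase
# disproportionately raises EET levels, the basis of the strategy above.
```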

Inflammation in Brain Injury

Neuroinflammation is a critical aspect of traumatic brain injury (TBI), or brain damage from physical injury. TBI produces cerebral inflammation, which activates microglia to release post-traumatic inflammatory mediators and reactive oxygen species (ROS). While inflammatory mediators function as an immune response after injury, their overproduction after TBI can lead to neural damage, apoptosis (cell death), and/or brain edema (swelling due to the accumulation of fluid). This additional damage stimulates a further microglial response, creating a positive feedback loop [5].

Suppressing this post-traumatic inflammation is a common strategy in treating TBI. Given sEH’s role in inflammation, deletion of sEH via genetic knockout has been shown to mitigate these effects. Deletion of sEH decreases the number of activated microglia post-TBI and, as a result, reduces the release of inflammatory mediators, thereby reducing edema, apoptosis, and inflammation [5]. The rise in EET levels associated with sEH inhibition also produces anti-inflammatory effects, and possibly an increase in neurotrophic factors, growth factors with a neuroprotective effect [5].

sEH and Alzheimer’s

Neuroinflammation is also linked to both the onset and the progression of Alzheimer’s. Prolonged activation of glial cells (astrocytes and microglia) increases proinflammatory molecules, creating a “neurotoxic” environment that aggravates the disease [6]. Alzheimer’s patients have nearly twofold higher sEH levels in their astrocytes, leading to lower EET levels and thus weaker anti-inflammatory effects [7]. Given the importance of inflammation in Alzheimer’s, inhibition of sEH should increase EET levels, promoting the anti-inflammatory effects crucial to treating the disease.

One experiment treated 5xFAD mice, transgenic mice that express Alzheimer’s phenotypes, with the sEH inhibitor TPPU. The results indicated that not only did the mice have increased EET levels, but the inhibitor had also affected gene expression in the hippocampus by downregulating proinflammatory genes. This effectively “calmed” the overactive immune response correlated with Alzheimer’s. Further testing revealed that these treated mice had fewer and smaller amyloid plaques (protein aggregates considered to play a critical role in Alzheimer’s and often regarded as a biomarker for the disease), as well as fewer microglia surrounding those plaques [6]. As prolonged activation of microglia leads to proinflammatory effects and reduced phagocytosis of amyloid plaques, lowering the immune response of these glial cells could theoretically reverse these inflammatory effects. The ability of EETs to reduce ROS may also be involved, as ROS may contribute to neurotoxicity [7]. These results were achieved with TPPU and verified with another sEH inhibitor; they also aligned with results seen in genetic knockouts of sEH [6].

sEH in Peripheral Organs

In addition to the brain, peripheral organs may play an important role in sEH’s effects on neurological disorders. Outside of astrocytes and other glial cells, sEH is most highly expressed in the liver. Because the liver is the body’s largest gland and is responsible for a variety of metabolic activities, hepatic sEH (sEH of the liver) has influence throughout the body. In a study of major depressive disorder (MDD) by Qin et al., researchers used mice in a chronic mild stress (CMS) model, in which the animals are exposed to varying stressors to mimic the effects of depression. They found that chronic stress increased sEH levels in the mouse liver, suggesting that hepatic sEH levels are linked to stress and depression [8]. Increasing sEH levels in these mice via targeted gene therapy not only led to depressive-like behaviors but also decreased proteins that modulate synaptic plasticity, suggesting that sEH parallels the effects of stress at the molecular level. Conversely, deleting the gene that codes for sEH in the liver induced an antidepressant effect in the CMS mice [8]. In other words, sEH induced depressive-like effects, and inhibiting sEH activity led to antidepressant-like effects, even in stressed model mice. These findings were paralleled in human MDD patients, who had lower EET levels than controls, suggesting higher sEH activity in patients with depression, much like the elevated sEH seen in Alzheimer’s patients [8].

Conclusion

Given the anti-inflammatory abilities of EETs, further development of sEH inhibitors could shape the treatment of multiple neurodegenerative diseases associated with inflammation. Beyond those discussed here, other neurological conditions, such as Parkinson’s, schizophrenia, and seizures, have also exhibited elevated sEH levels [6]. As these diseases have different pathologies, the cause of elevated sEH levels may vary and is still under investigation. Nevertheless, the implication of sEH in such a variety of diseases expands the therapeutic range of sEH inhibitors.

As seen in the aforementioned MDD study by Qin et al., sEH in peripheral organs may also be involved in neurological disorders, as in the case of hepatic sEH. This not only points to a possible liver-brain axis connecting the two organs but also opens another avenue of research into the effects of sEH across the body. Diseases such as depression and heart disease have been linked to sEH in previous studies, and sEH inhibition has already been successful in decreasing blood pressure [9]. Further research into and development of sEH inhibition therefore has the potential to benefit numerous conditions across the human body.

The Alzheimer’s and TBI research discussed above implemented pharmacological sEH inhibitors, which affect only the C-terminal hydrolase domain of the enzyme while leaving the N-terminal phosphatase activity intact [5,6]. As the N-terminus has been associated with cholesterol metabolism, its role in neurological disorders offers another possible area of study for sEH-based treatment. Genetic deletion of the gene that codes for sEH, as in the Qin et al. study, removes both the hydrolase and the phosphatase activity of the enzyme. Studies implementing sEH knockouts may therefore have benefitted from the loss of the phosphatase activity as well as the hydrolase [9]. Further research into the enzyme’s distinct domains broadens the possibilities for its use.

 

References:

  1. Wee Yong V. 2010. Inflammation in neurological disorders: a help or a hindrance? Neuroscientist. 16(4):408-20. doi: 10.1177/1073858410371379
  2. Harris TR, Hammock BD. 2013. Soluble epoxide hydrolase: gene structure, expression and deletion. Gene. 526(2):61-74. doi: 10.1016/j.gene.2013.05.008. 
  3. Flores DM, Natalia CS, Regina PA, Regina CC. 2020. Soluble Epoxide Hydrolase and Brain Cholesterol Metabolism. Frontiers in Molecular Neuroscience. 12: 325. doi: 10.3389/fnmol.2019.00325    
  4. Sura P, Sura R, Enayetallah AE, Grant DF. 2008. Distribution and expression of soluble epoxide hydrolase in human brain. J Histochem Cytochem. 56(6):551-9. doi: 10.1369/jhc.2008.950659. 
  5. Hung TH, Shyue SK, Wu CH, Chen CC, Lin CC, Chang CF, Chen SF. 2017. Deletion or inhibition of soluble epoxide hydrolase protects against brain damage and reduces microglia-mediated neuroinflammation in traumatic brain injury. Oncotarget. 8(61):103236-103260. doi: 10.18632/oncotarget.21139.
  6. Ghosh A, Comerota MM, Wan D, Chen F, Propson NE, Hwang SH, Hammock BD, Zheng H. 2020. An epoxide hydrolase inhibitor reduces neuroinflammation in a mouse model of Alzheimer’s disease. Sci Transl Med. 12(573):eabb1206. doi: 10.1126/scitranslmed.abb1206.
  7. Griñán-Ferré C, Codony S, Pujol E, Companys-Alemany J, Corpas R, Sanfeliu C, Pérez B, Loza MI, Brea J, Morisseau C, Hammock BD, Vázquez S, Pallàs M, Galdeano C. 2020. Pharmacological Inhibition of Soluble Epoxide Hydrolase as a New Therapy for Alzheimer’s Disease. Neurotherapeutics 17:1825–1835. doi: 10.1007/s13311-020-00854-1
  8. Qin X, Wu Z, Dong JH, Zeng YN, Xiong WC, Liu C, Wang MY, Zhu MZ, Chen WJ, Zhang Y, Huang QY, Zhu XY. 2019. Liver Soluble Epoxide Hydrolase Regulates Behavioral and Cellular Effects of Chronic Stress. Cell Reports. 29(10): 3223-3234. doi:10.1016/j.celrep.2019.11.006.
  9. Spector AA, Fang X, Snyder GD, Weintraub NL. 2004. Epoxyeicosatrienoic acids (EETs): metabolism and biochemical function. Progress in Lipid Research. 43(1): 55-90. doi:10.1016/S0163-7827(03)00049-3.

A Dive into a Key Player of Learning and Memory: An Interview with Dr. Karen Zito

Image by MethoxyRoxy – Own work, CC BY-SA 2.5

By: Neha Madugala, Neurobiology, Physiology & Behavior, ‘21

Author’s Note: After writing a paper for the Aggie Transcript on the basics of dendritic spines, I wanted to take a more in-depth look at current research in this field by interviewing UC Davis professor Karen Zito, who is actively involved in dendritic spine research. While many questions remain unanswered within this field, I was interested in learning more about current theories and hypotheses that address some of them. Special thanks to Professor Zito for talking to us about her research. It was an honor to speak with her about her passion for, and knowledge of, this exciting and complex field.

 

Preface: This interview is a follow-up to an original literature review on dendritic spines. For a more in-depth look at general information on dendritic spines, check out this article.

 

Neha Madugala (NM): Can you briefly describe some of the research that your lab does? 

Dr. Karen Zito (KZ): My lab is interested in learning and memory. Specifically, we want to understand the molecular and cellular changes that occur in the brain as we learn. Our brain consists of circuits, connecting groups of neurons, and the function of these circuits allows us to learn new skills and form new memories. Notably, the strength of connections between specific neurons in these circuits can change during learning, or neurons can form new connections with other neurons to support learning. We have been mainly focusing on structural changes in the brain. This includes questions such as the following: How do neurons change in structure during learning? How do new circuit connections get made? What are the molecular signaling pathways that are activated to allow these changes to happen while learning? How is the plasticity of neural circuits altered with age or due to disease?

NM: What are dendritic spines? 

KZ: Dendritic spines are microscopic protrusions from dendrites of neurons and are often the site of change associated with learning. Axons of one neuron will synapse onto the dendritic spine of another neuron. Spines will grow and retract during development and synapses between a spine and axon will form during learning, forming complex circuits that allow us to do intricate tasks such as playing the piano. 

NM: Transient spines only last a couple of days. What role do they play in learning? 

KZ: One hypothesis for the function of transient spines is that they exist to sample the environment, allowing the brain to speed up its ability to find the right connections required for learning. The rapid growth and retraction of transient spines helps our neurons find the connections needed to form new neural circuits by sampling many more candidate connections and homing in on the right ones. For instance, in a past study on songbirds, researchers found that baby songbirds with faster-moving transient spines learned songs more quickly than those with slower-moving transient spines. Once a transient spine finds the right connection, it transitions from transient to permanent and takes part in a circuit that supports a new behavior, such as the songbird learning a new song.
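
(A toy sketch of this sampling idea, not a model from the literature: if each transient spine is a random draw from a pool of candidate partners, faster turnover means more draws per day and a quicker path to the correct connection. The pool size and sampling rates below are made up purely for illustration.)

```python
import random

# Toy Monte Carlo of the "sampling" hypothesis: faster spine turnover means
# more candidate connections tested per day, so the correct partner is found
# sooner. Every number here is invented for illustration.
random.seed(1)

def days_to_find_partner(samples_per_day, n_candidates=500, trials=1000):
    """Mean days until one of the day's random samples hits the 'right' partner."""
    total_days = 0
    for _ in range(trials):
        day = 0
        found = False
        while not found:
            day += 1
            found = any(random.randrange(n_candidates) == 0
                        for _ in range(samples_per_day))
        total_days += day
    return total_days / trials

for rate in (1, 5, 20):  # transient spines sampled per day
    print(f"{rate:>2} samples/day -> ~{days_to_find_partner(rate):.0f} days")
# Higher turnover finds the target connection roughly proportionally faster,
# consistent with the songbird observation described above.
```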

NM: Can presynaptic neurons directly synapse onto a dendrite or only a dendritic spine?

KZ: Many neurons do not have spines at all. Spines are predominantly present on neurons in the higher order areas of the brain involved in learning, memory, perception and cognition. Spiny neurons are present in areas of the brain where neural connections are changing over time, or plastic — allowing the brain to learn, adjust, and change. Certain areas of the brain do not require a lot of change and, in some cases, circuit change may be detrimental to function. For example, we may not want to change connections established for movement of specific muscles.

NM: What is the difference between synapses that occur directly on a dendrite versus onto a dendritic spine?

KZ: Importantly, the molecular composition at synapses can vary widely between synapses, regardless of whether this connection occurs at a shaft or a spine. Therefore, it is difficult to name specific compositional elements always found at a spine versus a shaft. Inhibitory synapses, formed by GABAergic neurons, tend to be found directly on the shaft of dendrites. Glutamatergic neurons, which are excitatory, in the cerebral cortex tend to synapse on dendritic spines, but can also connect directly with the dendrite.  

NM: All dendritic spines have excitatory synapses that require NMDA and AMPA receptors [1]. Are these receptors necessary for these spines to exist?

KZ: To my knowledge, we do not know the answer to this question. It is possible to remove these receptors a few at a time, and spines do not disappear. However, it is really hard to remove a receptor from a single spine and, if the receptors are removed from the entire neuron, it is often replaced with another receptor in a process called compensation. In order to test if this is possible, someone would have to knock out all genes encoding AMPA receptors and NMDA receptors, which is over seven genes, to see if spines still formed. Notably, if AMPA receptors are internalized, the spine typically shrinks, and if more AMPA receptors are brought to the surface, the spine typically grows. Indeed, the number of AMPA receptors at the synapse is directly proportional to the size of the spine. 

NM: What drives spine formation and elimination when creating and refining neural circuits? 

KZ: There really is no definitive answer to this question currently, and many of those performing dendritic spine research are interested in answering it. Let’s first look at formation. One theory suggests that there are factors coming from neighboring neurons, such as glutamate or BDNF [2], which promote spine formation. However, it is unclear which of these are acting in vivo, in the animal. Also, spine formation is much greater in younger animals than in older animals, which suggests that the cells are in a different state when younger versus older. The cell can be in a less plastic state where all the spines are moving slowly, as seen in older animals, or in a more plastic state where all the spines are moving more quickly, as seen in younger animals. Thus, there appears to be a combination of intrinsic state, or how plastic the cell is, and extracellular factors, such as the presence of glutamate, that dictates spine formation. Elimination is similar in that we do not really know the entire molecular signaling sequence driving it. It is a fascinating question for so many reasons. For example, when we are young we overproduce spines, and as we grow the spine density declines as our nervous system selectively chooses which connections to keep. Then, as adults, our spine density remains relatively stable, even though we obviously keep learning. One hypothesis is that, as a spine grows during learning, a nearby spine with no activity shrinks and is eventually eliminated. In fact, we have observed this phenomenon in our studies. Therefore, there may be some local competition between spines for space, which keeps the density the same across most of the adult life span.

NM: Does learning drive the formation of dendritic spines, or does dendritic spine formation drive learning?

KZ: This may depend on the type of learning; both have been observed. Studies have imaged the brain during learning. Some groups have found an increase in new spine growth, suggesting that learning drives new spine formation. Other groups have found the same amount of new spine growth but a greater degree of new spine stabilization, suggesting that learning drives new spine stabilization.

NM: It has been observed that some intellectual disabilities and neuropsychiatric disorders are associated with an abnormal number of dendritic spines compared to neurotypical individuals. Is this related to insufficient production of dendritic spines at birth or to deficits in pruning?

KZ: Indeed, autism spectrum disorders have been associated with an increase in spines, which could reflect either an overproduction of spines or reduced spine elimination. Notably, the majority of neurological disorders resulting in cognitive deficits, such as Alzheimer’s disease, are associated with decreased spine densities. It is unclear whether spine number or brain function diminishes first, but much of the current research suggests that the spines go away first, leading to the cognitive problems observed. In many disorders with too few spines, spine formation is normal but elimination is excessive; consistent with this, the spine density of Alzheimer’s and schizophrenia patients is relatively normal prior to disease onset. For Alzheimer’s specifically, some researchers suggest that the pathogenic amyloid beta peptide, once released, binds to molecules on the surface of the dendritic spine that drive spine loss.

NM: How might dendritic spine research help in treating neuropsychiatric and neurodegenerative disorders?

KZ: Current research is looking at how to stabilize and destabilize dendritic spines. If we were able to manipulate spine stability, we could potentially rescue it in patients with neuropsychiatric disorders. Understanding the pathways that control the stability of these spines will allow researchers to find targets for future therapeutic treatments, which could lead to better therapies and outcomes.

 

Footnotes

  1. Receptors that are permeable to cations. They are usually associated with the depolarization of neurons.
  2. Brain-derived neurotrophic factor (BDNF): Plays a role in the growth and development of neurons.

The Role of Dendritic Spine Density in Neuropsychiatric and Learning Disorders

Photo originally by MethoxyRoxy on Wikimedia Commons. No changes. CC License BY-SA 2.5.

By Neha Madugala, Cognitive Science, ‘21

Author’s Note: Last quarter I took Neurobiology (NPB100) with Karen Zito, a professor at UC Davis. I was interested in her research on dendritic spines because it relates to my personal research interest in the language and cognitive deficiencies present in different populations, such as individuals with schizophrenia. There seems to be a correlational link between the generation and quantity of dendritic spines and the presence of different neurological disorders. Given the dynamic nature of dendritic spines, current research is studying their exact role and the potential to manipulate these spines in order to impact learning and memory.

 

Introduction 

Dendritic spines are small bulbous protrusions that line the sides of dendrites on a neuron [12]. They serve as a major site of synapses for excitatory neurons, which continue signal propagation in the brain. Relatively little is known about the exact purpose and role of dendritic spines, but as of now, there seems to be a correlation between the concentration of dendritic spines and the presence of different disorders, such as autism spectrum disorder (ASD), schizophrenia, and Alzheimer’s disease. Scientists hypothesize that dendritic spines are a key player in the pathogenesis of various neuropsychiatric disorders [8]. It should be noted that other morphological changes are also observed when individuals with the mentioned neuropsychiatric disorders are compared to neurotypical individuals. However, all these disorders share the common thread of abnormal dendritic spine density.

The main disorders studied in relation to dendritic spine density are autism spectrum disorder (ASD), schizophrenia, and Alzheimer’s disease. Current studies suggest that in these disorders the number of dendritic spines strays from what is observed in a neurotypical individual. It should be noted that there is a general decline in dendritic spines as an individual ages; however, intellectual disabilities and neuropsychiatric disorders seem to alter this density at a more extreme rate. The graph demonstrates the general trend of dendritic spine density for various disorders, though these trends may vary slightly across individuals with the same disorder.

 

Dendritic Spines

I. Role of Dendritic Spines

Dendritic spines are protrusions found on certain types of neurons throughout the brain, such as in the cerebellum and cerebral cortex. They were first identified by Ramon y Cajal, who classified them as “thorns or short spines” located nonuniformly along the dendrite [6]. 

The entire human cerebral cortex contains on the order of 10^14 dendritic spines. A single dendrite can contain several hundred spines [12]. There is an overall greater density of dendritic spines on peripheral dendrites versus proximal dendrites and the cell body [3]. Their main role is to assist in synapse formation on dendrites.

Dendritic spines fall into two categories: persistent and transient. Persistent spines are considered ‘memory’ spines, while transient spines are considered ‘learning’ spines. Transient spines are categorized as spines that exist for four days or less, and persistent spines as spines that exist for eight days or longer [5].
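
To make the lifetime cutoffs concrete, here is a minimal Python sketch, not from the original article, that classifies a spine by how many days it has been observed. The four-day and eight-day cutoffs come from the categorization above; the function name and the handling of the five-to-seven-day gap are illustrative assumptions.

```python
def classify_spine(lifetime_days: int) -> str:
    """Classify a dendritic spine by observed lifetime, per the
    cutoffs described above [5].

    Spines lasting 5-7 days fall between the two published cutoffs;
    labeling them 'intermediate' here is an illustrative assumption.
    """
    if lifetime_days <= 4:
        return "transient ('learning') spine"
    elif lifetime_days >= 8:
        return "persistent ('memory') spine"
    else:
        return "intermediate (between the published cutoffs)"

# Example usage with hypothetical observation spans:
for days in (2, 6, 30):
    print(f"{days} days -> {classify_spine(days)}")
```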

The dense concentration of spines on dendrites is crucial to the fundamental nature of dendrites. At an excitatory synapse, the release of neurotransmitter onto excitatory receptors of the postsynaptic cell results in an excitatory postsynaptic potential (EPSP), which depolarizes the cell toward the threshold for firing an action potential. An action potential is the electrical impulse by which a signal is transmitted from one neuron to another. In order for a neuron to fire an action potential, there must be an accumulation of positive charge at its synapses that reaches a certain threshold (Figure 2). That is, the cell must reach a certain level of depolarization: a change in charge across the neuron’s membrane making the inside more positive. A single EPSP may not produce enough depolarization to reach this threshold. The presence of multiple dendritic spines on the dendrite, however, allows multiple synapses to form and multiple EPSPs to be summated. With the summation of these EPSPs on the dendrites of the neuron, the cell can reach the action potential threshold. The greater the density of dendritic spines along the postsynaptic cell, the more synaptic connections can be formed, increasing the chance of an action potential occurring (a minimal numerical sketch of this summation follows the Figure 2 steps below).

Figure 2. Firing of Action Potential (EPSP)

  1. Neurotransmitter is released by the presynaptic cell into the synaptic cleft. 
  2. For an EPSP, an excitatory neurotransmitter will be released, which will bind to receptors on the postsynaptic cell. 
  3. The binding of these excitatory neurotransmitters will result in sodium channels opening, allowing sodium to go down its electrical and chemical gradient – depolarizing the cell. 
  4. The EPSPs will be summated at the axon hillock and trigger an action potential. 
  5. This action potential will cause the firing cell to release a neurotransmitter at its axon terminal, further conveying the electrical signal to other neurons.
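
As referenced above, the following is a minimal Python sketch, not from the original article, of how EPSPs summate toward the firing threshold. All numerical values (a resting potential of -70 mV, a threshold of -55 mV, and 5 mV of depolarization per EPSP) are illustrative assumptions, and the model ignores the temporal decay and spatial attenuation of real postsynaptic potentials.

```python
# Toy model of EPSP summation at a dendrite.
# All values are illustrative assumptions, not physiological measurements.

RESTING_POTENTIAL_MV = -70.0  # assumed resting membrane potential
THRESHOLD_MV = -55.0          # assumed action potential threshold
EPSP_AMPLITUDE_MV = 5.0       # assumed depolarization per EPSP

def reaches_threshold(num_epsps: int) -> bool:
    """Return True if num_epsps simultaneous EPSPs depolarize the
    membrane from rest to the action potential threshold."""
    membrane_mv = RESTING_POTENTIAL_MV + num_epsps * EPSP_AMPLITUDE_MV
    return membrane_mv >= THRESHOLD_MV

for n in range(1, 5):
    status = "fires an action potential" if reaches_threshold(n) else "stays below threshold"
    print(f"{n} summated EPSP(s): {status}")
```

Under these assumed values, one EPSP only brings the membrane to -65 mV; it takes the summation of three simultaneous EPSPs, as from three spine synapses, to reach -55 mV, which is why a greater spine density increases the chance of firing.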

 

II. Creation 

Dendrites initially are formed without spines. As development progresses, the plasma membrane of the dendrite forms protrusions called filopodia. These filopodia then form synapses with axons, and eventually transition from filopodia to dendritic spines [6]. 

The reason behind the creation of dendritic spines is currently unknown, but there are a few potential hypotheses. The first hypothesis suggests that the presence of dendritic spines increases the packing density of synapses, allowing more potential synapses to be formed. The second hypothesis suggests that their presence helps prevent excitotoxicity: overexcitation of the excitatory receptors (NMDA and AMPA receptors) present on the dendrites. These receptors usually bind glutamate, a typically excitatory neurotransmitter released from the presynaptic cell, and overexcitation can result in damage to the neuron or, if more severe, neuronal death. Since dendritic spines compartmentalize charge [3], they help prevent the dendrite from being over-excited beyond the threshold potential for an action potential. Lastly, another hypothesis holds that the large variation in dendritic spine morphology reflects a role for these different shapes in modulating how postsynaptic potentials are processed by the dendrite based on the function of the signal.

The creation of these dendritic spines is rapid during early development, slowly tapering off as the individual gets older, when it is largely replaced by the pruning of synapses formed with dendritic spines. Pruning helps improve the signal-to-noise ratio of signals sent within neuronal circuits [3]: the ratio of meaningful signal to background activity, which determines the efficiency of signal transmission. Experimentation has shown that the presence of glutamate and excitatory receptors (such as NMDA and AMPA) can result in the formation of dendritic spines within seconds [3]. Activation of NMDA and AMPA receptors results in cleavage of intercellular adhesion molecule-5 (ICAM5) from hippocampal neurons; ICAM5 is a “neuronal adhesion molecule that regulates dendritic elongation and spine maturation” [11]. Furthermore, through a combination of fluorescent dye and confocal or two-photon laser scanning microscopy, scientists have observed that spines can undergo minor changes within seconds and more drastic conformational changes, even disappearing, over minutes to hours [12].

 

III. Morphology

The spine’s morphology, a large bulbous head connected by a very thin neck to the dendrite, assists in its role as a postsynaptic compartment. This shape allows one synapse at a dendritic spine to be activated and strengthened without influencing neighboring synapses [12].

Dendritic spine shape is extremely dynamic, allowing one spine to slightly alter its morphology throughout its lifetime [5]. However, dendritic spine morphology seems to take on a predominant form determined by the brain region in which the spine is located. For instance, spines receiving input from presynaptic neurons of the thalamus tend to take on the mushroom shape, whereas neurons of the lateral nucleus of the amygdala tend to have thin spines on their dendrites [2]. The type of neuron and the brain region the spine originates from seem to be correlated with the observed morphology.

The spine contains a postsynaptic density, which consists of neurotransmitter receptors, ion channels, scaffolding proteins, and signaling molecules [12]. In addition, the spine has smooth endoplasmic reticulum, which forms stacks called the spine apparatus. It also contains polyribosomes, hypothesized to be the site of local protein synthesis in spines, and an actin-based cytoskeleton for structure [12]. The actin-based cytoskeleton makes up for the lack of microtubules and intermediate filaments, which play a crucial role in the structure and transport of most other animal cells. Furthermore, these spines are capable of compartmentalizing calcium, the ion whose influx at synapses signals the presynaptic cell to release its neurotransmitter into the synaptic cleft [12]. Calcium plays a crucial role in second messenger cascades, influencing neural plasticity [6]. It also plays a role in actin polymerization, which allows for the motile nature of spine morphology [6].

Dendritic spines come in many shapes. The common types are ‘stubby’ (short and thick with no neck), ‘thin’ (small head and thin neck), ‘mushroom’ (large head with a constricted neck), and ‘branched’ (two heads branching from the same neck) [12].

 

IV. Learning and Memory

Dendritic spines play a crucial role in memory and learning through long-term potentiation (LTP), which is thought to be the cellular basis of learning and memory. LTP is thought to induce spine formation, consistent with the common observation that learning is associated with the formation of dendritic spines. Furthermore, LTP is thought to be capable of altering both the immature and the mature hippocampus, a region commonly associated with memory [2]. In contrast, long-term depression (LTD) works in the opposite direction, decreasing dendritic spine density and size [2].

The causal relationship between dendritic spines and learning remains poorly understood. There is a general trend suggesting that the creation of these spines is associated with learning; however, it is unclear whether learning results in the formation of these spines or the formation of these spines results in learning. The general idea behind this hypothesis is that dendritic spines aid in the formation of synapses, allowing the brain to form more connections. As a result, a decline in dendritic spines in neuropsychiatric disorders, such as schizophrenia, may inhibit an individual’s ability to learn, as reflected in the various cognitive and linguistic deficiencies observed in individuals with schizophrenia.

Memory is associated with the strengthening and weakening of connections due to LTP and LTD, respectively. The alteration of these spines through LTP and LTD is called activity-dependent plasticity [6]. The main morphological shapes associated with memory are the mushroom spine, a large head with a constricted neck, and the stubby spine, a short and thick spine with no neck [6]. Both of these spines are relatively large, resulting in more stable and enduring connections; these bigger spines associated with learning are a result of LTP. By contrast, transient spines (living four days or fewer) are usually smaller and more immature in morphology and function, resulting in more temporary and less stable connections. A toy model of this activity-dependent growth and shrinkage follows.
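
The following is a minimal Python sketch, not from the original article, of activity-dependent plasticity as described above: LTP-like events grow a spine’s size, LTD-like events shrink it, and a spine that shrinks below some minimum is eliminated. The numerical values, growth factors, and elimination rule are illustrative assumptions rather than a published model.

```python
from typing import Optional

ELIMINATION_SIZE = 0.1  # assumed size (arbitrary units) below which a spine is pruned

def apply_plasticity(size: float, event: str) -> Optional[float]:
    """Grow the spine on an LTP event, shrink it on an LTD event,
    and return None if it shrinks below the elimination size."""
    if event == "LTP":
        size *= 1.5  # potentiation enlarges the spine head (assumed factor)
    elif event == "LTD":
        size *= 0.5  # depression shrinks the spine head (assumed factor)
    return size if size >= ELIMINATION_SIZE else None

size = 0.4  # hypothetical starting size of a small, immature spine
for event in ["LTP", "LTD", "LTD", "LTD"]:
    size = apply_plasticity(size, event)
    if size is None:
        print(f"after {event}: spine eliminated")
        break
    print(f"after {event}: size {size:.2f}")
```

In this toy run, one potentiation followed by repeated depression shrinks the spine until it is eliminated, mirroring the idea that inactive or weakened spines lose their connections.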

LTP and LTD play a crucial role in modifying dendritic spine morphology. Neuropsychiatric disorders can alter these mechanisms resulting in abnormal density and size of these spines.

 

Schizophrenia 

I. What is Schizophrenia? 

Schizophrenia is a mental disorder that results in disordered thinking and behaviors, hallucinations, and delusions [9]. The exact mechanics of schizophrenia are still being studied as researchers try to determine the underlying biological causes of the disorder and ways to help affected individuals. Current treatment focuses on reducing and, in some cases, resolving symptoms, but more research and understanding are required to fully treat this mental disorder.

 

II. Causation

The exact source of schizophrenia seems to lie in a combination of certain genes and environmental effects. There seems to be a correlation between traumatic or stressful life events during adolescence and an increased susceptibility to developing schizophrenia [1]. While research is still underway, certain studies point to cannabis having a role in increasing susceptibility to schizophrenia or worsening symptoms in individuals who already have the disorder [1]. There also seems to be a genetic component, given the increased likelihood of developing schizophrenia if a family member has it; this factor seems to result from a combination of genes, but no single causative gene has been identified. Finally, there seems to be a chemical component, given differences in neurochemical composition between neurotypical individuals and individuals with schizophrenia; specifically, researchers have observed elevated levels of dopamine in individuals with schizophrenia [1].

 

III. Relationship between Dendritic Spines and Schizophrenia 

A common thread among most schizophrenia patients is impaired dendritic morphology of pyramidal neurons (the predominant cell type of the cerebral cortex) across various regions of the cortex [7]. Postmortem brain tissue studies show a reduced density of dendritic spines in the brains of individuals with schizophrenia. These findings are consistent across the various regions of the brain that have been studied, such as the frontal and temporal neocortex, the primary visual cortex, and the subiculum within the hippocampal formation [7]. Across seven studies reporting this finding, the median decrease in spine density was 23%, with individual studies reporting declines ranging from 6.5% to 66% [7].

It should be noted that studies were done to determine whether the decline in spine density was due to the use of antipsychotic drugs; however, animal and human studies showed no significant effect of these drugs on dendritic spine density.

This decline in dendritic spine density is hypothesized to result either from a failure of the brain of schizophrenic individuals to produce sufficient dendritic spines at birth or from a more rapid loss of these spines during adolescence, when the onset of schizophrenia is typically observed [7]. The source of this decline is unclear, but it seems attributable to deficits in pruning, maintenance, or the mechanisms underlying the formation of these dendritic spines [7].

However, there are conflicting results. For instance, Thompson et al. conducted an in vivo study suggesting that a decline in spine density results in the progressive loss of gray matter typically observed in schizophrenic individuals. The study used MRI scans of twelve schizophrenic individuals and twelve neurotypical individuals, finding a progressive decline in gray matter starting in the parietal lobe and expanding out to motor, temporal, and prefrontal areas [10]. The authors attribute this decline mainly to a loss of dendritic spine density as the disorder progresses, which coincides with the previously mentioned hypothesis of spine loss during adolescence.

It is also possible that a combination of both factors is at work. Most studies have only been able to observe postmortem brain tissue, leaving it unclear whether the spines decline over time or are simply never produced in the first place. The lack of in vivo studies makes it difficult to find a concrete trend within the data.

 

Conclusion

While research is still ongoing, current evidence suggests that dendritic spines are crucial to learning and memory. Their role in these functions is reflected in their altered density in various neuropsychiatric disorders, such as schizophrenia, certain learning deficits present in some individuals with ASD, and the memory deficits of Alzheimer’s disease. These deficits seem to arise during the early creation of neural networks at synapses. Further research into the development of these spines and the creation of their different morphological forms will be crucial in determining how to potentially treat or cure the deficits present in neuropsychiatric and learning disorders.

 

References

  1. “Causes: Schizophrenia.” NHS Choices, NHS, www.nhs.uk/conditions/schizophrenia/causes/.
  2. Bourne, Jennifer N, and Kristen M Harris. “Balancing Structure and Function at Hippocampal Dendritic Spines.” Annual Review of Neuroscience, U.S. National Library of Medicine, 2008, www.ncbi.nlm.nih.gov/pmc/articles/PMC2561948/.
  3. “Dendritic Spines: Spectrum: Autism Research News.” Spectrum, www.spectrumnews.org/wiki/dendritic-spines/.
  4. Hofer, Sonja B., and Tobias Bonhoeffer. “Dendritic Spines: The Stuff That Memories Are Made Of?” Current Biology, vol. 20, no. 4, 2010, doi:10.1016/j.cub.2009.12.040.
  5. Holtmaat, Anthony J.G.D., et al. “Transient and Persistent Dendritic Spines in the Neocortex In Vivo.” Neuron, Cell Press, 19 Jan. 2005, www.sciencedirect.com/science/article/pii/S0896627305000048.
  6. McCann, Ruth F, and David A Ross. “A Fragile Balance: Dendritic Spines, Learning, and Memory.” Biological Psychiatry, U.S. National Library of Medicine, 15 July 2017, www.ncbi.nlm.nih.gov/pmc/articles/PMC5712843/.
  7. Moyer, Caitlin E, et al. “Dendritic Spine Alterations in Schizophrenia.” Neuroscience Letters, U.S. National Library of Medicine, 5 Aug. 2015, www.ncbi.nlm.nih.gov/pmc/articles/PMC4454616/.
  8. Penzes, Peter, et al. “Dendritic Spine Pathology in Neuropsychiatric Disorders.” Nature Neuroscience, U.S. National Library of Medicine, Mar. 2011, www.ncbi.nlm.nih.gov/pmc/articles/PMC3530413/.
  9. “Schizophrenia.” Mayo Clinic, Mayo Foundation for Medical Education and Research, 7 Jan. 2020, www.mayoclinic.org/diseases-conditions/schizophrenia/symptoms-causes/syc-20354443.
  10. “Schizophrenia and Dendritic Spines.” Ness Labs, 20 June 2019, nesslabs.com/schizophrenia-dendritic-spines.
  11. “Synaptic Cleft: Anatomy, Structure, Diseases & Functions.” The Human Memory, 17 Oct. 2019, human-memory.net/synaptic-cleft/.
  12. Tian, Li, et al. “Activation of NMDA Receptors Promotes Dendritic Spine Development through MMP-Mediated ICAM-5 Cleavage.” The Journal of Cell Biology, Rockefeller University Press, 13 Aug. 2007, www.ncbi.nlm.nih.gov/pmc/articles/PMC2064474/.
  13. Zito, Karen, and Venkatesh N. Murthy. “Dendritic Spines.” Current Biology, vol. 12, no. 1, 2002, doi:10.1016/s0960-9822(01)00636-4.

Einstein’s Fifth Symphony

By Jessie Lau, Biochemistry and Molecular Biology ‘20

Author’s Note: Growing up, playing the piano was a major part of my life: weekdays were filled with hour-long practices while Saturdays were for lessons. My schedule was filled with preparations for board exams and recitals, and in the absence of the black and white keys, my fingers were always tapping away at any surface I could find. My parents always told me learning the piano was good for my education and would put me ahead in school because it would help with my math and critical thinking in the long run. However, I was never able to understand the connection between the ease of reading music and my ability to calculate complex integrals. In this paper, I will explore the benefits of learning an instrument for cognitive development.

 

Introduction

What do Albert Einstein, Werner Heisenberg, Max Planck, and Barbara McClintock all have in common? Other than their Nobel Prize-winning research in their respective fields, all of these scientists shared a love of playing a musical instrument. At an early age, Einstein followed after his mother in playing the violin; Heisenberg learned to read music to play the piano at the young age of four; Planck became gifted at playing the organ and piano; McClintock played the tenor banjo in a jazz band during her time at Cornell University [1]. While these researchers honed their musical talent, they were engaging both their central and peripheral nervous systems. Playing an instrument requires the coordination of various parts of the brain working together. The motor system gauges the meticulous movements that develop sound, which is then picked up by the auditory circuitry. Simultaneously, sensory information picked up by the fingers and hands is delivered to the brain, and individuals reading music use visual nerves to send information to the brain. This information is processed and interpreted to generate a response carried out by the extremities. All the while, the sound of music elicits an emotional response from the player.

Feedforward and feedback pathways of the brain are two auditory-motor interactions elicited while playing an instrument. Feedforward interactions are predictive processes that can influence motor responses, such as tapping to the rhythm of a beat in anticipation of the upcoming flux and accents in a piece. Alternatively, feedback interactions are particularly important for stringed instruments such as the violin, where pitch changes continuously and requires constant management [12]. As shown in Figure 1, the musician must auditorily perceive each note and respond with suitably timed motor changes. All of these neurophysiological components raise the question of how musical training confers brain development. Longitudinal studies find that musical training can have expansive benefits for the development of linguistics, executive function, general IQ, and academic achievement [2].

 

 

Linguistic Skills

Music shares the same dorsal auditory pathway and processing center in the brain with all other sounds. This passageway is anatomically linked by the arcuate fasciculus, suggesting that instrumental training may carry over to language-related skills. The pathway is central to an array of such skills, including language development, second-language acquisition, and verbal memory [2]. According to Vaquero et al., “Playing an instrument or speaking multiple languages involve mapping sounds to motor commands, recruiting auditory and motor regions such as the superior temporal gyrus, inferior parietal, inferior frontal and premotor areas, that are organized in the auditory dorsal stream” [10].

Researchers studying the effects of acoustic sounds mimicking stop-consonant speech on language development find that children learning instruments during the critical developmental period (0-6 years old) build lasting structural and organizational modifications in their auditory system that later affect language skills. Stop consonants include the voiceless sounds /p/, /t/, and /k/, as well as the voiced sounds /b/, /d/, and /g/. Dr. Strait and her colleagues describe their observations: “Given relationships between subcortical speech-sound distinctions and critical language and reading skills, music training may offer an efficient means of improving auditory processing in young children” [11].

Similarly, Dr. Patel suggests that speech and music share overlapping brain connections because of the accuracy required while playing an instrument. Refining this finesse demands attentional training combined with self-motivation and determination. Repeated stimulation of these brain networks garners “emotional reinforcement potential,” which is key to the “… good performance of musicians in speech processing” [3].

Beyond stimulating auditory neurons, instrumental training has been shown to improve verbal memory. For example, in a comparative analysis, researchers found that children who had undergone musical training demonstrated verbal memory advantages over their peers without training [4]. Following up a year after the initial study, they found that continued practice led to substantial advancement in verbal memory, while those who discontinued failed to show any improvement. This finding is supported by Jakobson et al., who propose that “… enhanced verbal memory performance in musicians is a byproduct of the effect of music instruction on the development of auditory temporal-order processing abilities” [5].

In the case of acquiring a second language (abbreviated L2), a study of 50 Japanese adults learning English finds that “… the ability to analyze musical sound structure would also likely facilitate the analysis of a novel phonological structure of an L2” [6]. These researchers further elaborate on the potential of improving English syntax with musical exercises concentrating on syntactic processes, such as “… hierarchical relationships between harmonic or melodic musical elements” [6]. Multiple studies have also found that music training invokes specific brain structures also employed during language processing, including Heschl’s gyrus and Broca’s and Wernicke’s areas [2].

While music and language elements are stored in different regions of the brain, the shared auditory pathway allows instrumental training to strengthen linguistic development in multiple areas.

 

Executive Function

Executive function is the application of the prefrontal cortex to carry out tasks requiring conscious effort to attain a goal, particularly in novel scenarios [7]. This umbrella term includes cognitive control of attention and inhibition, working memory, and the ability to switch tasks. Psychologists Dr. Hannon and Dr. Trainor find formal musical education to be a direct implementation of “… domain-specific effects on the neural encoding of musical structure, enhancing musical performance, music reading and explicit knowledge of the musical structure” [8]. The combination of domain-general development and executive functioning can influence linguistics as well as mathematical development. While learning an instrument, musicians must actively read music notes, a unique language of its own, and translate these visual findings into precise mechanical maneuvers, all while attending to and remedying errors in harmony, tone, beat, and fingering. Furthermore, becoming well-trained requires scheduled rehearsals to build a foundational framework for operating the instrument while learning new technical elements and building robust spatial awareness. This explicit practice of executive function during scheduled practice sessions is thus essential in sculpting this region of the prefrontal cortex.

 

General IQ and Academic Performance

While merely listening to music has been found to confer some academic advantage, the active, deliberate practice of playing music in an ordered process grants musically apt individuals scholastic benefits absent in their counterparts. In a study conducted by Schellenberg, 144 six-year-olds were assigned to one of four groups: the music groups received keyboard or voice lessons, while the control groups received drama classes or no lessons at all [9]. After 36 weeks, data from the Wechsler Intelligence Scale for Children–Third Edition (WISC-III), composed of varying assessments to evaluate intelligence, showed a significant increase in IQ in all four groups; this general rise can be attributed to the start of grade school. However, individuals who received keyboard or voice lessons showed a greater jump in IQ. Schellenberg reasons that school attendance increases one’s IQ and that the smaller the learning setting, the more academic success the student tends to achieve; since music lessons are often taught individually or in small groups that mirror this school structure, IQ boosts ensue.

 

Possible Confounding Factors

While these studies have found a positive correlation between learning a musical instrument and varying brain maturation skills, the researchers note the importance of taking into consideration confounding factors that often cannot be controlled for. These include socioeconomic level, prior IQ, education, and the other activities participants are involved in. While many of these researchers worked to gather subjects with similarities in these domains, all of these external elements can play an essential role during one’s developmental period. Moreover, formally learning an instrument is often a financially hefty extracurricular activity, so more affluent families with higher educational backgrounds can typically better afford these programs for their children.

Furthermore, each study implemented different practice times, training durations, and instruments for participants to learn; findings gathered under one set of parameters may not be reproducible under another.

Beyond these external factors, one must also consider each participant’s willingness to learn the instrument. If one lacks the desire or motivation to become musically educated, spending the required time playing the instrument does not necessarily translate to developmental gains, as the relevant regions of the brain are not actively engaged.

 

Conclusion

Numerous studies have demonstrated the varying benefits music training can provide for cognitive development. Extensive research suggests that the process of physically, mentally, and emotionally engaging oneself in learning an instrument can award diverse advantages to the maturing brain. The discipline and rigor needed to gain expertise in playing a musical instrument translate to the academic and experimental setting, and the unique melodious tunes found in each piece spark creativity and imaginative vision. While instrumental education does not fully account for Einstein, Heisenberg, Planck, and McClintock’s scientific success, this extracurricular activity has been shown to provide a substantial boost in critical thinking.

 

References

  1. “The Symphony of Science.” The Nobel Prize, March 2019. https://www.symphonyofscience.com/vids.
  2. Miendlarzewska, Ewa A., and Wiebke J. Trost. “How Musical Training Affects Cognitive Development: Rhythm, Reward and Other Modulating Variables.” Frontiers in Neuroscience 7 (January 20, 2014). https://doi.org/10.3389/fnins.2013.00279.
  3. Patel, Aniruddh D. “Why Would Musical Training Benefit the Neural Encoding of Speech? The OPERA Hypothesis.” Frontiers in Psychology 2 (June 29, 2011): 1–14. https://doi.org/10.3389/fpsyg.2011.00142.
  4. Ho, Yim-Chi, Mei-Chun Cheung, and Agnes S. Chan. “Music Training Improves Verbal but Not Visual Memory: Cross-Sectional and Longitudinal Explorations in Children.” Neuropsychology 17, no. 3 (August 2003): 439–50. https://doi.org/10.1037/0894-4105.17.3.439.
  5. Jakobson, Lorna S., Lola L. Cuddy, and Andrea R. Kilgour. “Time Tagging: A Key to Musicians’ Superior Memory.” Music Perception 20, no. 3 (2003): 307–13. https://doi.org/10.1525/mp.2003.20.3.307.
  6. Slevc, L. Robert, and Akira Miyake. “Individual Differences in Second-Language Proficiency: Does Musical Ability Matter?” Psychological Science 17, no. 8 (2006): 675–81. https://doi.org/10.1111/j.1467-9280.2006.01765.x.
  7. Banich, Marie T. “Executive Function.” Current Directions in Psychological Science 18, no. 2 (April 1, 2009): 89–94. https://doi.org/10.1111/j.1467-8721.2009.01615.x.
  8. Hannon, Erin E., and Laurel J. Trainor. “Music Acquisition: Effects of Enculturation and Formal Training on Development.” Trends in Cognitive Sciences 11, no. 11 (November 2007): 466–72. https://doi.org/10.1016/j.tics.2007.08.008.
  9. Schellenberg, E. Glenn. “Music Lessons Enhance IQ.” Psychological Science 15, no. 8 (August 1, 2004): 511–14. https://doi.org/10.1111/j.0956-7976.2004.00711.x.
  10. Vaquero, Lucía, Paul-Noel Rousseau, Diana Vozian, Denise Klein, and Virginia Penhune. “What You Learn & When You Learn It: Impact of Early Bilingual & Music Experience on the Structural Characteristics of Auditory-Motor Pathways.” NeuroImage 213 (2020): 116689. https://doi.org/10.1016/j.neuroimage.2020.116689.
  11. Strait, D. L., S. O’Connell, A. Parbery-Clark, and N. Kraus. “Musicians’ Enhanced Neural Differentiation of Speech Sounds Arises Early in Life: Developmental Evidence from Ages 3 to 30.” Cerebral Cortex 24, no. 9 (2013): 2512–21. https://doi.org/10.1093/cercor/bht103.
  12. Zatorre, Robert J., Joyce L. Chen, and Virginia B. Penhune. “When the Brain Plays Music: Auditory–Motor Interactions in Music Perception and Production.” Nature Reviews Neuroscience 8, no. 7 (July 2007): 547–58. https://doi.org/10.1038/nrn2152.