Tag Archives: ucdavis
The Scientific Cost of Progression: CAR-T Cell Therapy
By Picasso Vasquez, Genetics and Genomics ‘20
Author’s Note: One of the main goals for my upper division UWP class was to write about a recent scientific discovery. I decided to write about CAR-T cell therapy because this summer I interned at a pharmaceutical company and worked on a project that involved using machine learning to optimize the CAR-T manufacturing process. I think readers would benefit from this article because it talks about a recent development in cancer therapy.
“There’s no precedent for this in cancer medicine.” Dr. Carl June is the director of the Center for Cellular Immunotherapies and the director of the Parker Institute for Cancer Immunotherapy at the University of Pennsylvania. June and his colleagues were the first to use CAR-T, which has since revolutionized personalized cancer immunotherapy [1]. “They were like modern-day Lazarus cases,” said Dr. June, referencing the resurrection of Lazarus in the Gospel of John and how it parallels the first two patients to receive CAR-T. CAR-T, or chimeric antigen receptor T-cell therapy, is a novel cancer immunotherapy that uses a person’s own immune system to fight off the cancerous cells in their body [1].
Last summer, I had the opportunity to venture across the country from Davis, California, to Springhouse, Pennsylvania, where I worked for 12 weeks as a computational biologist. One of the projects I worked on used machine learning models to improve the manufacturing process of CAR-T, with the goal of reducing the cost of the therapy. The manufacturing process begins when T-cells are collected from the hospitalized patient through a process called leukapheresis. The collected T-cells are then frozen and shipped to a manufacturing facility, such as the one I worked at this summer, where they are thawed and expanded in large bioreactors. On day three, the T-cells are genetically engineered to be selective towards the patient’s cancer by the addition of the chimeric antigen receptor; this step turns the T-cells into CAR-T cells [2]. For the next seven days, the engineered T-cells continue to grow and multiply in the bioreactor. On day 10, the cells are frozen and shipped back to the hospital, where they are infused into the patient. Over the 10 days prior to receiving the CAR-T cells, the patient is given chemotherapy to prepare their body for the infusion of the immunotherapy [2]. This whole process is very expensive; as Dr. June put it in his TEDMED talk, “it can cost up to 150,000 dollars to make the CAR-T cells for each patient.” But the cost does not stop there; when you include the cost of treating other complications, the total “can reach one million dollars per patient” [1].
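To make the schedule easier to follow, the timeline described above can be laid out as data. The sketch below is purely illustrative: the day numbers and steps are taken from the paragraph above, while the class, field, and variable names are invented for this example and do not correspond to any real manufacturing software.

```python
from dataclasses import dataclass

@dataclass
class Step:
    day: int      # day in the manufacturing timeline
    site: str     # where the step takes place
    action: str   # what happens to the cells

# Timeline as described in the paragraph above (illustrative only).
CAR_T_TIMELINE = [
    Step(0, "hospital", "collect patient T-cells by leukapheresis; freeze and ship to facility"),
    Step(1, "facility", "thaw T-cells and begin expansion in bioreactors"),
    Step(3, "facility", "add the chimeric antigen receptor, turning T-cells into CAR-T cells"),
    Step(4, "facility", "continue growing the engineered cells in the bioreactor (days 4-9)"),
    Step(10, "facility", "freeze finished CAR-T cells and ship back to the hospital"),
    Step(10, "hospital", "infuse CAR-T cells into the patient, who has received chemotherapy"),
]

if __name__ == "__main__":
    for step in CAR_T_TIMELINE:
        print(f"Day {step.day:>2} ({step.site}): {step.action}")
```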
The biggest problem in fighting cancer is that cancer cells are normal cells of the body gone wrong. Because cancer cells look so similar to normal cells, the body’s natural immune system, which consists of B and T-cells, cannot discern the difference between them and therefore cannot fight off the cancer. The concept underlying CAR-T is to isolate a patient’s T-cells and genetically engineer them to express a protein, called a receptor, that can directly recognize and target the cancer cells [2]. The genetically modified receptor allows the newly created CAR-T cells to bind cancer cells by finding the antigen that matches the added receptor. Once the bond between receptor and antigen has formed, the CAR-T cells become cytotoxic and release small molecules that signal the cancer cell to begin apoptosis [3]. Although there have long been drugs that help the body’s T-cells fight cancer, CAR-T breaks the mold by showing remarkable efficacy and selectivity. Dr. June stated, “27 out of 30 patients, the first 30 we treated, or 90 percent, had a complete remission after CAR-T cells.” He went on to say, “companies often declare success in a cancer trial if 15 percent of the patients had a complete response rate” [1].
As amazing as the results of CAR-T have been, this success did not happen overnight. According to Dr. June, “CAR T-cell therapies came to us after a 30-year journey, along with a road full of setbacks and surprises.” One of these setbacks is the side effects that result from the delivery of CAR-T cells. When T-cells find their corresponding antigen, in this case a target protein on the surface of the cancer cells, they begin to proliferate at very high levels. For patients who have received the therapy, this is a good sign, because the increase in T-cells indicates that the therapy is working. But when T-cells rapidly proliferate, they produce molecules called cytokines. Cytokines are small signaling proteins that tell surrounding cells what to do. During CAR-T therapy, the T-cells rapidly produce a cytokine called IL-6, or interleukin-6, which induces inflammation, fever, and even organ failure when produced in high amounts [3].
According to Dr. June, the first patient to receive CAR-T had “weeks to live and … already paid for his funeral.” When he was infused with CAR-T, the patient had a high fever and fell comatose for 28 days [1]. When he awoke from his coma, he was examined by doctors and they found that his leukemia had been completely eliminated from his body, meaning that CAR-T had worked. Dr. June reported that “the CAR-T cells had attacked the leukemia … and had dissolved between 2.9 and 7.7 pounds of tumor” [1].
Although the first patients had outstanding success, the doctors still did not know what caused the fevers and organ failures. It was not until the first child received CAR-T that they discovered the cause of the adverse reaction. Emily Whitehead, at six years old, was the first child to be enrolled in the CAR-T clinical trial [1]. Emily had been diagnosed with acute lymphoblastic leukemia (ALL), an advanced, incurable form of leukemia. After she received the infusion of CAR-T, she experienced the same symptoms as the prior patient. “By day three, she was comatose and on life support for kidney failure, lung failure, and coma. Her fever was as high as 106 degrees Fahrenheit for three days. And we didn’t know what was causing those fevers” [1]. While running tests on Emily, the doctors found an upregulation of IL-6 in her blood. Dr. June suggested that they administer tocilizumab to combat the increased IL-6 levels. After contacting Emily’s parents and the review board, Emily was given the drug, and “Within hours after treatment with Tocilizumab, Emily began to improve very rapidly. Twenty-three days after her treatment, she was declared cancer-free. And today, she’s 12 years old and still in remission” [1]. Currently, two versions of CAR-T have been approved by the FDA, Yescarta and Kymriah, which treat diffuse large B-cell lymphoma (DLBCL) and acute lymphoblastic leukemia (ALL), respectively [1].
The whole process is stressful and time sensitive. This long manufacturing task results in the million-dollar price tag on CAR-T and is why only patients in the worst medical condition can receive it [1]. However, as Dr. June states, “the cost of failure is even worse.” Despite the financial cost and difficult manufacturing process, CAR-T has elevated cancer therapy to a new level and set a new standard of care. However, there is still much work to be done. The current CAR-T drugs have only been shown to be effective against blood (“liquid”) cancers such as lymphomas and remain ineffective against solid tumors [4]. Regardless, research into improving CAR-T continues at both the academic and industrial level.
References:
- June, Carl. “A ‘living drug’ that could change the way we treat cancer.” TEDMED, Nov. 2018, ted.com/talks/carl_june_a_living_drug_that_could_change_the_way_we_treat_cancer.
- Tyagarajan S, Spencer T, Smith J. 2019. Optimizing CAR-T Cell Manufacturing Processes during Pivotal Clinical Trials. Mol Ther. 16: 136-144.
- Maude SL, Laetsch TW, Buechner J, et al. 2018. Tisagenlecleucel in Children and Young Adults with B-Cell Lymphoblastic Leukemia. N Engl J Med. 378: 439-448.
- O’Rourke DM, Nasrallah MP, Desai A, et al. 2017. A single dose of peripherally infused EGFRvIII-directed CAR T cells mediates antigen loss and induces adaptive resistance in patients with recurrent glioblastoma. Sci Transl Med. 9: 399.
The Similarity of Human and Dog Microbiomes
By Mangurleen Kaur, Biological Sciences, ’23
Author’s Note: In one of my introductory biology classes, I learned about microbes. That class discussed relationships among microbes and between microbes and human beings. One point that stuck in my mind was the microbial relationship between humans and one of our favorite pets, the dog. Researching this topic, I found it so astounding that I decided to write about it. I hope this piece will be interesting not only for science lovers but also for the general public.
Both inside and out, our bodies harbor a huge array of microorganisms. These microorganisms are a diverse group of generally minute life forms, which are called microbiota when found within a specific environment. Microbiota can refer to all the microorganisms found in an environment, including bacteria, viruses, archaea, protozoa, and fungi. The collection of genomes from all the microorganisms found in a particular environment is referred to as a microbiome. According to the Human Microbiome Project (HMP), this plethora of microbes contributes more genes responsible for human survival than humans themselves contribute. Researchers have also estimated that the human microbiome contains 360 times more bacterial genes than human genes. Their results show that this microbial contribution is critical for human survival. For instance, bacterial genes present in the gastrointestinal tract allow humans to digest and absorb nutrients that would otherwise be unavailable. In addition, microbes assist in the synthesis of many beneficial compounds, like vitamins and anti-inflammatory agents, which our genome cannot produce. (4)
Where does this mini-ecosystem come from? The microbiome begins to assemble as soon as we leave the mother’s womb: we acquire microbes from the mother’s vagina and then, later on, through breastfeeding, which plays a great role in building each person’s unique microbial community. Several factors influence the microbiome, including physiology, diet, lifestyle, age, and environment. Microbiomes are not only present in humans but also in most animals, and they play a significant role in their health. For instance, gastrointestinal microorganisms exist in symbiotic associations with animals: microorganisms in the gut assist in the digestion of feedstuffs, help protect the animal from infections, and in some cases even synthesize and provide essential nutrients to their animal host. This gives us an idea of how important these microorganisms are to our living system as a whole. (3)
Besides humans’ strong emotional connection with dogs, there is also a biological dimension to human-dog interactions, and researchers have begun to study it. Computational biologist Luis Pedro Coelho and his colleagues at the European Molecular Biology Laboratory, in collaboration with Nestlé Research, studied the gut microbiome (the genetic material belonging to the microbiota) of beagles and retrievers. They found that the gene content of the dog microbiome showed more similarity to the human gut microbiome than to the microbiomes of pigs or mice. When the researchers mapped the gene content of the dog, mouse, and pig microbiomes against human gut genes, they found overlaps of 63%, 20%, and 33%, respectively. (5) This shows the extensive similarity between the human and dog gut microbiomes in comparison with other animals. Speaking on the discovery, Luis Pedro Coelho said: “We found many similarities between the gene content of the human and dog gut microbiome. The results of this comparison suggest that we are more similar to man’s best friend than we originally thought.” (1)
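The comparison behind those percentages boils down to asking what fraction of one gene catalog has a match in another. The sketch below shows that set-overlap idea with made-up gene identifiers; the actual study mapped full metagenomic gene catalogs with sequence alignment rather than simple name matching, so treat this only as an illustration of the calculation.

```python
def catalog_overlap(query_genes: set, reference_genes: set) -> float:
    """Return the fraction of the query catalog that has a match in the reference catalog."""
    if not query_genes:
        return 0.0
    return len(query_genes & reference_genes) / len(query_genes)

# Hypothetical toy catalogs standing in for the dog and human gut gene catalogs.
human_gut = {"geneA", "geneB", "geneC", "geneD", "geneE"}
dog_gut = {"geneA", "geneB", "geneC", "geneX"}

print(f"Dog genes with a human match: {catalog_overlap(dog_gut, human_gut):.0%}")  # 75% in this toy example
```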
Researchers at the University of Colorado Boulder studied the types of microbes present on different parts of the human body to better understand this diversity and its significance. They sampled 159 people and 36 dogs across 60 American families, taking tongue, forehead, and palm swabs as well as fecal samples to characterize individual microbial communities. The researchers learned that people who own dogs are much more likely to share the same kinds of these “good” bacteria with their dogs. They also learned that children who are raised with dogs are less likely than others to develop a range of immune-related disorders, including asthma and allergies. “One of the biggest surprises was that we could detect such a strong connection between their owners and pets,” said Knight, a faculty member at CU-Boulder’s BioFrontiers Institute. (6) The results showed that cohabiting adults who own a dog share the greatest number of skin phylotypes, while adults who neither own a dog nor live together share the fewest.
The University of Arizona, in collaboration with other universities including UC San Diego, is conducting another study that recruits healthy people from Arizona, age 50 or older, who have not lived with a dog for at least the past six months, and then pairs willing participants with assigned dogs. The goal of the study is to see whether the dogs enhance the health of older people by acting as probiotics (good bacteria). This research is ongoing, and the outcomes have not yet been released. Rob Knight, Professor of Pediatrics and Computer Science & Engineering at UC San Diego, and his lab study microbiomes. Knight and his team found that the microbial communities on adult skin are, on average, more similar to those of their own dogs than to those of other dogs. They also found that cohabiting couples share more microbes with one another if they have a dog, compared with couples who do not. Their research suggests that a dog’s owner can be identified just by analyzing the microbial diversity of the dog and its human, as they share microbiomes. These studies reveal a relationship that is very useful to microbiology and to health research more broadly. (2)
These studies reveal the various interesting relationships between microbiomes, us, and other living beings. So far, they show how a dog’s microbiome is shared with its owner and how gene sequencing helps us understand these connections. The growing understanding of this connection with microorganisms raises many other outstanding questions: What are the health benefits of a dog to a human? How can dogs help in preventing certain chronic diseases? Answering these emerging questions and refining our understanding of microbiomes is an exciting challenge for scientists and researchers.
Works Cited
- “NIH Human Microbiome project defines normal bacterial makeup of the body”. National Institutes of Health, U.S. Department of Health and Human Services. www.nih.gov. Published on August 31, 2015. Accessed May 10, 2020.
- Ganguly, Prabarna. “Microbes in us and their role in human health and disease”. www.Genome.gov. Published on May 29, 2019. Accessed May 10, 2020.
- “Dog microbiomes closer to humans’ than expected”. Research in Germany, Federal Ministry of Education and Research. www.researchingermany.org. Published on April 20, 2018. Accessed May 11, 2020.
- Trevino, Julissa. “A Surprising Way Dogs Are Similar to Humans.” www.smithsonianmag.com. Published on April 23, 2018. Accessed February 11, 2020.
- Song, Se Jin, Christian Lauber, Elizabeth K Costello, Catherine A Lozupone, Gregory Humphrey, Donna Berg-Lyons, Gregory Caporaso, et al. “Cohabiting Family Members Share Microbiota with One Another and with Their Dogs.” eLife. eLife Sciences Publications, Ltd. elifesciences.org. Published on April 16, 2013. Accessed May 11, 2020.
- Sriskantharajah, Srimathy. “Ever feel in your gut that you and your dog have more in common than you realized?” www.biomedcentral.com. Published on April 11, 2018. Accessed February 11, 2020.
Use of Transgenic Fish and Morpholinos for Analysis of the Development of the Hematopoietic System
By Colleen Mcvay, Biotechnology, 2021
Author’s Note: I wrote this essay for my Molecular Genetics class to review the methods of utilizing zebrafish as a model for understanding the mechanisms underlying the development of blood (hematopoietic) stem cells. I would love for readers to better understand how the use of transgenic zebrafish and morpholinos has advanced our knowledge of the embryonic origin, genetic regulation, and migration of HSCs during early embryonic development.
Introduction
Hematopoietic stem cells, the immature cells found in the peripheral blood and bone marrow, develop during embryogenesis and are responsible for the constant renewal of blood throughout an organism’s life. Hematopoietic development in the vertebrate embryo arises in consecutive, overlapping waves, described as primitive and definitive waves. These waves are distinguished by the type of specialized blood cells generated, and each occurs in distinct anatomical locations (7). In order to visualize and manipulate these embryonic developmental processes, a genetically tractable model must be used. Although many transgenic animals provide adequate models for studying hematopoiesis and disease, the zebrafish (Danio rerio) proves far superior because its embryonic developmental processes are easily visualized and manipulated (6). Through diagrams and analysis, this discussion will expand upon the mechanisms of hematopoietic stem cell development and explain how this knowledge is enriched through the use of transgenic animals, such as the zebrafish, and morpholinos.
The Zebrafish Model
The zebrafish (Danio rerio) model has proven to be a powerful tool in the study of hematopoiesis and offers clear advantages over other vertebrate models, such as the mouse (Mus musculus). These advantages include developmental and molecular mechanisms conserved with higher vertebrates, the optical transparency of its embryos and larvae, the genetic and experimental convenience of the fish, external fertilization allowing for in vivo visualization of embryogenesis, and its sequential waves of hematopoiesis (9). Additionally, zebrafish allow clear visualization of the phenotypic changes that occur during the transition from the embryonic to adult stages, which is beneficial for understanding and visualizing the sequential-wave mechanism of hematopoiesis explained below (8). Mouse models, on the other hand, are embryonic lethal for mutations in many hematopoietic transcription factors, meaning the embryo dies before such visualization is possible (12).
An Overview of Hematopoietic Development
The development of blood in all vertebrates involves two waves of hematopoiesis: the primitive wave and the definitive wave (4). Primitive hematopoiesis, involving an erythroid progenitor (a cell that gives rise to megakaryocytes and erythrocytes), happens during early embryonic development and is responsible for producing the erythroid and myeloid cell populations (5). The primitive wave is transitory, and its main purpose is to produce red blood cells to support tissue oxygenation. These erythroid progenitor cells first appear in blood islands in the extra-embryonic yolk sac; however, they are neither pluripotent nor capable of self-renewal (11). Later in development (at varying points for different species), definitive hematopoiesis produces hematopoietic stem and progenitor cells (HSPCs), which generate the multipotent blood lineages of the adult organism (7). The HSCs originate in the aorta-gonad-mesonephros (AGM) region of the developing embryo, from which they migrate to the fetal liver and bone marrow [Figure 1].
Figure 1: Stages of Embryonic Hematopoiesis
This figure shows the establishment of primitive and definitive hematopoietic stem cells (HSCs) during embryonic development. The first HSCs appear in the blood islands of the extraembryonic yolk sac. The primitive wave is transient, and the successive definitive wave starts intraembryonically in the aorta-gonad-mesonephros (AGM) region. The definitive HSCs are multipotent and migrate to the fetal liver, where they proliferate before seeding the bone marrow, circulating systemically through the embryo as hematopoiesis proceeds.
Hematopoietic Development in the Zebrafish Model:
Like all vertebrates, zebrafish have sequential waves of hematopoiesis. However, hematopoiesis in zebrafish occurs in a distinct manner compared to other vertebrate models, with its primitive HSCs generated intra-embryonically in a ventral mesoderm tissue called the intermediate cell mass (ICM) (2). Throughout this primitive wave, the anterior part of the embryo creates myeloid cells while the posterior creates mostly erythrocytes, both of which circulate throughout the embryo from 24 hours post-fertilization (10). The next step involves hematopoiesis in the aorta-gonad-mesonephros (AGM) region, followed by the emergence of HSCs from the ventral wall of the dorsal aorta. The HSCs then migrate to a posterior region in the tail called the caudal hematopoietic tissue (CHT). Finally, from 4 days post-fertilization, lymphopoiesis initiates in the thymus and HSCs move to the kidney marrow (functionally equivalent to bone marrow in mammals) (10, 11) [Figure 2].

Although the anatomical sites of hematopoiesis differ between zebrafish and mammals, the molecular mechanisms and genetic regulation are highly conserved, permitting translation to mammals in many ways. First, because zebrafish embryos can survive without red blood cells for an extended time by relying on passive diffusion of oxygen, they are ideal for identifying mutations that would be embryonic lethal in mice (9). These zebrafish mutants have revealed genes that are critical components of human blood diseases and allow toxicity and embryonic lethality to be recognized at an early stage of drug development. Additionally, the zebrafish model is amenable to forward genetic screens that are infeasible in any other vertebrate model simply due to cost and space requirements. Finally, zebrafish embryos are permeable to water-soluble chemicals, making them ideal for high-throughput screening of novel bioactive compounds.
Figure 2: Hematopoiesis Development in the Zebrafish Model
A.) The sequential sites of hematopoiesis in embryonic zebrafish development. Hematopoiesis first occurs in the intermediate cell mass (ICM), next in the aorta-gonad-mesonephros (AGM), and then in the caudal hematopoietic tissue (CHT). Hematopoietic cells are later found in the thymus and kidney (modified from Orkin and Zon, 2008).
B.) Timeline of the developmental windows for hematopoietic sites in the zebrafish (modified from Orkin and Zon, 2008).
Transgenic Zebrafish & Morpholinos to Understand Genetic HSC Regulation and Migration
Transgenic zebrafish and morpholinos are easily manipulated and visualized through microinjection, chemical screening, and mutagenesis, all of which aid in identifying hematopoietic gene mutations and in understanding gene regulation and migration in a vertebrate model. Analysis of these mutations (through RNA sequencing, ChIP, microarrays, and selective inhibition of a gene) has identified critical components of blood development, describing both the functions of these genes within hematopoiesis and the phenotypes associated with defective development (1). Morpholinos target sequences around the translation start site of a transcript, allowing selective inhibition of a targeted gene and analysis of the regulatory sequences in the resulting mutant (Martin et al., 2011). Large-scale screening techniques (chemical suppressor screens, etc.) applied to these mutants have identified many small molecules capable of rescuing hematopoietic defects and halting disease, along with new regulatory pathways (9). Although zebrafish hematopoiesis has different origin sites and migratory patterns than mammalian hematopoiesis, the genetic regulation of HSC development and lineage specification is conserved, allowing insights into the pathophysiology of disease.
Conclusion
The zebrafish is an invaluable vertebrate model for studies of hematopoiesis because of its amenability to genetic manipulations and its easily viewed embryonic developmental processes. This organism has become increasingly important in understanding the genetic and epigenetic mechanisms of blood cell development and the information produced is vital for the translation into regenerative medicine applications. Although more research is needed into specifics of HSC differentiation and self-renewal, zebrafish sufficiently allow for newly identified mutations and translocations of human hematopoietic diseases and cancers to be visualized and analyzed, unlike any other model organism. With this analysis, a more complete understanding of the molecular mechanisms of certain hematopoietic diseases can be made, thus aiding in the process of new treatments.
References
- Boatman S, Barrett F, Satishchandran S, et al. Assaying hematopoiesis using zebrafish. Blood Cells Mol Dis 2013;51:271–276
- Detrich H. W et al. (1995). Intraembryonic hematopoietic cell migration during vertebrate development. Proc Natl Acad Sci USA 92: 10713- 10717.
- E. Dzierzak and N. Speck, “Of lineage and legacy: the development of mammalian hematopoietic stem cells,” Nature Immunology, vol. 9, no. 2, pp. 129–136, 2008.
- Galloway J. L., Zon L. I. (2003). Ontogeny of hematopoiesis: examining the emergence of hematopoietic cells in the vertebrate embryo. Curr. Top. Dev. Biol. 53, 139-158
- Kumar, Akhilesh et al. “Understanding the Journey of Human Hematopoietic Stem Cell Development.” Stem cells international vol. 2019 2141475. 6 May. 2019, doi:10.1155/2019/2141475
- Gore, Aniket V et al. “The zebrafish: A fintastic model for hematopoietic development and disease.” Wiley interdisciplinary reviews. Developmental biology vol. 7,3 (2018): e312. doi:10.1002/wdev.312
- Jagannathan-Bogdan, Madhumita, and Leonard I Zon. “Hematopoiesis.” Development (Cambridge, England) vol. 140,12 (2013): 2463-7. doi:10.1242/dev.083147
- de Jong, J.L.O., and Zon, L.I. (2005). Use of the Zebrafish System to Study Primitive and Definitive Hematopoiesis. Annu. Rev. Genet. 39, 481–501.
- Jing, L., and Zon, L.I. (2011). Zebrafish as a model for normal and malignant hematopoiesis. Dis. Model. Mech. 4, 433-438.
- Orkin, Stuart H, and Leonard I Zon. “Hematopoiesis: an evolving paradigm for stem cell biology.” Cell vol. 132,4 (2008): 631-44. doi:10.1016/j.cell.2008.01.025
- Paik E. J., Zon L. I. (2010). Hematopoietic development in the zebrafish. Int. J. Dev. Biol. 54, 1127-1137
- Sood, Raman, and Paul Liu. “Novel insights into the genetic controls of primitive and definitive hematopoiesis from zebrafish models.” Advances in hematology vol. 2012 (2012): 830703. doi:10.1155/2012/830703
The Wood Wide Web: Underground Fungi-Plant Communication Network
By Annie Chen, Environmental Science and Management ’19
Author’s note: When people think of ecosystems, trees and animals usually come to mind. However, we most often neglect an important part of the ecosystem: fungi. Without our noticing, fungi stealthily connect organisms underground, creating a communication network that helps them interact with one another.
Picture yourself walking your dog in a quiet, peaceful natural forest, where you imagine the two of you as the only organisms capable of interacting with one another. However, you are not alone; the plants can communicate, and those trees and grasses are always speaking to each other without your taking notice. The conversation between the vascular plants in this forest started before any of us can remember, and it will likely continue as long as this forest remains untouched. These conversations between seemingly disconnected organisms have helped this forest survive and thrive to become what you see today. You might wonder: what do these plants talk about, and most importantly, how do they communicate if they cannot move freely and have no vocal cords? The secret lies underground, in an extensive network.
This underground network connects different immobile creatures to one another. Much like the above-ground ecosystem, the underground ecosystem is diverse; it not only houses many animals but also consists of the roots of different plants, bacteria, and fungal mycelium. Plant roots interact with their immediate neighbors, but to communicate with plants farther away, plants rely on the underground fungal network, or, according to Dr. Suzanne Simard, who popularized the idea, the “Wood Wide Web” (WWW).
What is the underground “Wood Wide Web”, and how is it built?
This communication network is not made up of invisible radio waves like our Wi-Fi; rather, it relies on a minuscule, dense fungal network to deliver various signals and information [6]. These fungi, using their branching, thread-like filaments, build a communication network called the mycelium that connects individual plants, and even the whole ecosystem. The mycelium delivers nutrients, sugars, and water and, in a more complex dynamic with the plants, chemical signals. The fungi expand the mycelium through reproduction and individual growth, building these connections within the network. For fungi to expand their mycelium and link individual plants into a network, creating such an extensive system must be evolutionarily advantageous to them. That is where plant roots and their cooperative interactions come into play.
This communication network builds upon the foundation of mutualistic relationships between plants and fungi called mycorrhizae. In this mutualism, plants provide sugars to the fungi in exchange for limiting nutrients such as phosphorus, nitrogen, and sometimes water (Figure 1). According to an article by Fleming, around 80-90% of the earth’s vascular plants have this mutualistic relationship, which allows plants and fungi to connect with one another through the plant roots. Without the mutually beneficial relationship, the fungi are not obligated to expand their network to connect to the plant roots and “help” these plants deliver chemical signals.
Beyond nutrient and information exchange, plants also benefit from fungal priming: the initial fungal infection that creates the exchange interface between plant roots and fungal cells forces the plant immune system to increase its defenses. This increased immunity indirectly improves the plant’s chances of resisting major disruptions, such as a disease moving through the ecosystem [6]. Through nutrient exchange and by strengthening each species’ survival, this continuous plant-fungi network ties the whole ecosystem together.
Figure 1: A simplified visual of species interactions within the fungal network.
(Source: BBC)
That being said, the plant and fungal species that make up the WWW vary with the participants that built the ecosystem. The interaction also means that plants can selectively provide carbon or release defense chemicals to decide which fungi remain in a mutualistic relationship with them [1]. Introducing a non-native species can alter the ecosystem by encouraging different types of mycorrhizae. One such example is the introduction of European cheatgrass in Utah, U.S.A. The mycorrhizal makeup of the Utah site had shown no significant changes prior to the introduction, yet after the European cheatgrass arrived, and even though the cheatgrass carried no European fungi with it, the site showed a shift in its fungal genetic makeup [8]. Each plant individual, or species, using its preferences and abilities to “choose” its mutualistic partners, can diversify the fungal network and make it more extensive and powerful, both to the benefit and to the harm of other species in the ecosystem. This interspecies perspective is important for understanding the WWW.
Plants talk and interact through the “Wood Wide Web”
The communication extends to others in the ecosystem: plants can “speak” to each other interspecifically, too. Individuals in an ecosystem are closely linked to one another, and plant individuals interact either directly with each other, indirectly through the fungal network, or both. The indirect communication relies on the fungal network, through which various chemical signals pass. For instance, an increased phosphorus level in the soil signals to other plants that a plant-fungal interaction is taking place, and they may respond in different ways to turn the situation to their advantage: they could try to claim their share of nutrients by producing sugars to attract these fungi, or they could make their plant competitors less healthy by excreting chemicals that weaken the fungus’s ability to provide nutrients [13]. The WWW provides an internet that allows plants to select from a variety of methods to interact with one another, near or far.
Plants can choose to actively help each other through this fungal network, allowing both individuals, or species, to thrive in the ecosystem. Evolutionarily speaking, a plant individual benefits when its own kind thrives. When an individual plant is thriving and producing excess carbon, it can help other plants by transferring the surplus through the fungal network [6]. An older, dying tree can also transfer its resources to younger neighbors through the fungal network, or donate its stored nutrients to the entire ecosystem through decay, a process aided by the growth of fungal hyphae over the decomposing material [5]. Furthermore, through the WWW, plants are able to communicate with one another about possible threats, including herbivores and parasitic fungi. In the research of Song et al., tomato plants infected with pathogens were able to send various defensive chemical signals, such as enzymes, into the existing fungal network to healthy neighbors, warning them of the nearby danger before they were infected themselves; using this mechanism, the plants can concentrate defensive chemicals among neighbors to minimize the spread of a parasitic fungus in the area.
Not only can plants benefit one another, they can also use this network to put others at a disadvantage, for example to suppress a competing or predating species that threatens their own survival. Allelopathy, the exuding of chemicals to ward off enemies, usually brings to mind plants discouraging herbivores from consuming them, such as the milky sap that causes skin rashes and inflammation when a cucumber vine is cut; but allelopathy is also active underground through the WWW. Barto et al., through their research on allelopathy, showed that even within a disturbed habitat, when plant species compete, one species may use the regional fungal network attached to it to deliver allelochemicals to neighboring species, preserving the fitness of its own kind.
Passive Animals, Active Plants and Fungi
We always think of herbivores as active players impacting the ecosystem; in the WWW, they are the last to respond to changes. Plants and fungi signal each other when an herbivore is present in the network, well before it has established its presence on neighboring plants. Fungi are an important and active part of this ecosystem because they, too, can help exclude herbivores through chemical allelopathy. While a fungus could choose to colonize a separate species that provides more benefits for it, it can instead concentrate its energy on defending its current host. Before the herbivore can expand its population, the plants have already communicated with one another through the excretion of allelopathic chemicals, not only to ward off the herbivores causing the damage but also to warn other plants of the herbivore’s presence [1]. For example, fungal colonization of two nightshade species, Solanum ptycanthum and Solanum dulcamara, increased levels of defense proteins against feeding caterpillars. This is just one example of herbivory defense mechanisms that reduce predator fitness, specifically through reduced growth and feeding rates [11]. When caterpillars feed on these Solanum spp., the active players in the relationship, the fungi and their plant hosts, use chemical defense mechanisms indirectly induced by the fungi to discourage the herbivores from feeding; through evolution, they eventually drive out predators that are disadvantageous to fungi-plant fitness.
Alone without the Wood Wide Web: Human Impacts
The network is built on a web of hyphae connections that is barely visible to the human eye, and even more vulnerable to changes. Older ecosystems not only have a higher percentage of larger trees with broader root systems, but are also denser in number, which both lead to a more extensive mycorrhizal fungal network. The species diversity, on top of age and density, contributes to a complex and healthy WWW that supports all plants in the ecosystem [3]. However, a disturbed ecosystem severs the connections in this network, making the previously extensive system difficult to repair.
Human activities that disturb the soil can affect this fragile yet powerful connection: seasonal tilling in agriculture, intensive logging, and changes to soil chemistry and structure from laying concrete all inhibit the soil from building an extensive web. Physically turning and chemically altering the soil is a direct human impact that cuts off hyphal connections between plant individuals in the system. According to Dr. Simard’s statement in a Biohabitats interview, urban plants are less healthy because they lack the WWW to help them thrive through nutrient, water, and chemical signal exchange; they must meet all those needs themselves or rely on humans to provide them. Indirectly, the large-scale death and removal of plants through logging also prevents a healthy mutualistic relationship between plants and fungi from forming. When an individual plant is prevented from connecting with its mutualistic partners, whether through disturbance of the soil or the death of those partners, the WWW cannot extend, and that isolation leaves the urban tree population vulnerable to disease unless humans diligently maintain it.
It is true that smaller versions of the WWW still develop between periods of disturbance, as shown indirectly by the fungal colonization observed in off-site lab experiments such as those included in the studies of Barto et al. and Hawkes et al. However, these recovering networks lack the important interspecies collaborations of a minimally disturbed habitat, which functions much better in resisting climate change and the growing number of foreign and invasive species that threaten the health of the ecosystem.
Fortunately, despite the growing demand for land driven by economic and population growth, there is increased awareness of the importance of plants and the health of the ecosystem. Over the last two decades, new policies and practices indicate that major western conservation agencies have started to take on an interspecies perspective. One notable example is the inclusion of ecosystem management in the Clean Water Act, which reflects the notion that endangered flora or fauna species depend on the health of an ecosystem [14]. The increased understanding of how interconnected plant species are, together with conservation methods that existed before western colonization, has changed how governments aim to preserve nature.
Regardless of the level of human impact, the WWW carries important communication among plants, fungi, and herbivores through chemical signals and nutrient exchange, as these organisms sustain or outcompete each other. The connectivity that relays information within this network is key to a healthy plant community, and in turn to the health of the ecosystem. The next time you walk your dog in the woods, remember that the plants around you are capable of communicating thanks to this underground network. To keep these forests healthy for generations to come, it is up to us to rethink development strategies and preserve the network that helps these species thrive and keep communicating through the WWW.
References
- Biere, Arjen, Hamida B. Marak, and Jos MM van Damme. “Plant chemical defense against herbivores and pathogens: generalized defense or trade-offs?.” Oecologia 140.3 (2004): 430-441.
- Barto, E. Kathryn, et al. “The fungal fast lane: common mycorrhizal networks extend bioactive zones of allelochemicals in soils.” PLoS One 6.11 (2011): e27195.
- Beiler, Kevin J., et al. “Architecture of the wood‐wide web: Rhizopogon spp. genes link multiple Douglas‐fir cohorts.” New Phytologist 185.2 (2010): 543-553.
- Belnap, Jayne, and Susan L. Phillips. “Soil biota in an ungrazed grassland: response to annual grass (Bromus tectorum) invasion.” Ecological applications 11.5 (2001): 1261-1275.
- Biohabitats. “Expert Q&A: Suzanne Simard.” Biohabitats Newsletter 14.4 (2016).
- Fleming, Nic. “Plants talk to each other through a network of fungus.” BBC Earth. (2014).
- Gehring, Catherine, and Alison Bennett. “Mycorrhizal fungal–plant–insect interactions: the importance of a community approach.” Environmental entomology 38.1 (2009): 93-102.
- Hawkes, Christine V., et al. “Arbuscular mycorrhizal assemblages in native plant roots change in the presence of invasive exotic grasses.” Plant and Soil 281.1-2 (2006): 369-380.
- Hawkes, Christine V., et al. “Plant invasion alters nitrogen cycling by modifying the soil nitrifying community.” Ecology letters 8.9 (2005): 976-985.
- Macfarlane, Robert. “The Secrets of the Wood Wide Web.” The New York Times. (2016).
- Minton, Michelle M., Nicholas A. Barber, and Lindsey L. Gordon. “Effects of arbuscular mycorrhizal fungi on herbivory defense in two Solanum (Solanaceae) species.” Plant Ecology and Evolution 149.2 (2016): 157-164.
- Song, Yuan Yuan, et al. “Interplant communication of tomato plants through underground common mycorrhizal networks.” PloS one 5.10 (2010): e13324.
- Van der Putten, Wim H. “Impacts of soil microbial communities on exotic plant invasions.” Trends in Ecology & Evolution 25.9 (2010): 512-519.
- Doremus, H., Tarlock, A. Dan. “Can the Clean Water Act Succeed as an Ecosystem Protection Law?” George Washington Journal of Energy and Environmental Law 4 (2013): 49.
The History and Politics of Marijuana in the United States
By Vishwanath Prathikanti, Political Science, ’23
Author’s note: Marijuana today is a very controversial topic, with some arguing for a complete criminalization of it, others advocating for complete decriminalization of it, and many more in between. To understand marijuana today, and what it does to your body, we need to unravel its complex history and proven effects. Unfortunately, this is not always clear cut.
Marijuana, according to the National Institute on Drug Abuse (NIDA), is defined as “a greenish-gray mixture of the dried flowers of Cannabis sativa” and all other relatives, including Cannabis indica, Cannabis ruderalis, and hybrids [1]. The mind-altering effects of marijuana are mainly due to the chemical delta-9-tetrahydrocannabinol, commonly known as THC, which NIDA identifies as the active ingredient that makes marijuana dangerous.
This mind-altering effect stems from the structural similarity between THC and anandamide, a naturally occurring cannabinoid that functions as a neurotransmitter. Anandamide is an agonist, meaning it is a chemical that binds to receptors in the human endocannabinoid system and causes a response. The endocannabinoid system is a network of receptors designed to maintain bodily homeostasis, or stable conditions for the body [13]. Anandamide activates receptors that send chemical messages between nerve cells throughout the nervous system, specifically in areas of the brain “that influence pleasure, memory, thinking, concentration, movement, coordination, and sensory and time perception” [1]. The cerebellum, basal ganglia, and hippocampus are activated the most. Because of these effects on pleasure, researchers have speculated that anandamide could be released in the early phases of brain development to strengthen positive reactions toward food consumption, essentially encouraging people to eat, although they cannot confirm this [2]. Regardless, due to its similarity to anandamide, THC is able to attach to cannabinoid receptors on neurons in these brain areas and activate them. These receptors are usually activated by anandamide, but when they are activated by THC instead, functions such as cognition and balance become disrupted and various mental effects take hold. These effects most commonly present as “pleasant euphoria” and at other times as “heightened sensory perception (e.g., brighter colors), laughter, altered perception of time, and increased appetite” [1].
Repeated interference with the endocannabinoid system by THC can lead to various problems in areas of the brain that enable complex thinking, balance, and reaction time. THC has also been shown to have more permanent effects on developing brains, such as impairing specific learning and memory tasks, as discussed in one study involving rats. Gleason et al. targeted cannabinoid receptor 1 (CB1), an endocannabinoid receptor that is prevalent “in the cortex, hippocampus and striatum” and is activated by THC in both mice and humans [3]. Gleason et al. administered a CB1 agonist during adolescence in one group and during adulthood in another, and then observed brain development. They found that the adolescents developed long-term hippocampal learning and memory deficits, specifically manifesting as hippocampal long-term depression. Long-term depression is a reduction in synaptic efficacy, here in the hippocampus, driven by a repeated pattern of activity, in this case marijuana use, that makes learning new information more difficult [17]. The adult rats in this study did not show these changes. When it comes to humans, though, studies show conflicting results.
Most of the consistently observed damage to the human brain is within the prefrontal cortex, due to its large number of CB1 receptors. One study comparing adults who used marijuana four times per week with demographically similar adults who did not found that frequent use was associated with lower volumes of orbitofrontal cortex (OFC) gray matter [10]. The OFC is tied to various aspects of decision making and is widely believed to process the emotional and sensory details linked to decisions. The consequences of losing gray matter are not limited to suboptimal decision-making: to compensate for the loss of volume, the brain builds more connections in the OFC, and because of this higher number of connections, the brain requires more sustenance, particularly glucose, in order to function normally.
Despite marijuana’s existence in the world for centuries, marijuana research is still in its infancy. This is because the federal Drug Enforcement Administration (DEA) lists marijuana as a Schedule I drug, meaning “it has a high potential for abuse, no currently accepted medical use in treatment in the United States, and a lack of accepted safety for use under medical supervision” [5]. To put things into perspective, heroin, ecstasy, and methamphetamine are also listed as Schedule I drugs. To study the effects of THC, researchers not only need a special permit, but they also need to study individuals who are already consuming THC on a regular basis. This is the practice for all Schedule I drugs; scientists are unable to administer heroin, methamphetamine, or ecstasy to individuals in any research projects.
While THC has been shown to have various adverse effects on the brain, another component of marijuana has shown various positive effects: cannabidiol (CBD). CBD is the second most prevalent ingredient in marijuana and an essential component of medical marijuana. While THC is the main psychoactive ingredient in cannabis, responsible for physical and mental disorientation, CBD is responsible for a sense of anxiety relief [4]. Until 2018, CBD was also considered a Schedule I drug, but it has since been removed due to its benignity, meaning anyone can legally buy and sell CBD in the US as long as it is derived from hemp [4]. Hemp is another plant in the cannabis family, but it has different physical and chemical properties that separate it from marijuana; hemp is typically characterized by its sturdy stalks, used in textiles, as well as a low concentration of THC [16]. In particular, CBD has been essential in treating “some of the cruelest childhood epilepsy syndromes, such as Dravet syndrome and Lennox-Gastaut syndrome (LGS), which typically don’t respond to antiseizure medications” [4]. There is evidence CBD helps with numerous other conditions, including anxiety, insomnia, and chronic pain, but further research investigating side effects is required.
Returning to the hippocampus, Demirakca et al. observed hippocampal volume reductions in users who consumed cannabis that was higher in THC content than in CBD content. They concluded that there was an inverse relationship between gray matter (GM) volume and the THC/CBD ratio, meaning the more THC a product contains, the more likely the user is to have reduced GM volume [19]. However, a study spanning three years conducted by Koenders et al. found no relevant correlation between cannabis use in young adults and reduced GM volumes in multiple regions of the brain, including the hippocampus [19].
CBD, in particular, has been shown to affect the amygdala negatively, whereas THC has not shown such an effect. The amygdala is popularly known as the collection of nuclei that controls the sense of fear, linking it directly to a potential effect of THC usage: anxiety [18]. Surprisingly, a link between THC and the amygdala has been disproven by multiple papers; CBD, however, does affect the amygdala. One study by Rocchetti et al. in 2013 and another by Pagliaccio et al. in 2015 disproved previously held beliefs linking marijuana use to reduced amygdala size. Rocchetti et al. conducted a smaller study and demonstrated publication bias in the preceding amygdala research, while Pagliaccio et al. produced a wide study that found the variation in amygdala size between marijuana users and nonusers fell within the range of normal variation [11, 14]. However, while amygdala size may not be affected, there is evidence that its functionality is decreased. A study by Fusar-Poli et al. in 2010 found that CBD negatively affects the amygdala through its anti-anxiety effects: CBD weakens signals sent from the amygdala during fear-inducing situations, meaning that CBD usage could prompt a delayed or subdued reaction compared to normal [15].
This raises the question: if further research is required to understand the full effects of THC and CBD, why is marijuana considered such a charged political topic today? To understand marijuana today, it is necessary to examine its history, which was often driven by political rather than scientific motives.
The history of regulation and fear associated with marijuana dates back to 1930, when Harry Anslinger was appointed as the first commissioner of the Federal Bureau of Narcotics (FBN). Because of the Great Depression, the federal government needed to cut funding for various agencies, and at the time the FBN mainly combated heroin and cocaine, drugs used by a relatively small number of people. Anslinger needed a more widely used narcotic that would bring in the funds necessary to continue his war on drugs [6].
Anslinger decided that the drug would be cannabis and latched onto the story of a man named Victor Licata, who killed his family with an axe, allegedly after consuming cannabis. While there was no evidence that Licata had consumed marijuana before the killings, newspapers quickly sensationalized the story, and Anslinger went on various radio shows claiming that the drug could cause insanity. Anslinger gave the drug the name “marijuana” to associate it with Latinos, and he used racial tensions to insinuate that the drug made Black and Latino Americans “forget their place in the fabric of society” [6]. Anslinger’s actions culminated in his testifying before Congress in hearings on the Marihuana Tax Act of 1937, which effectively banned sales.
Before marijuana was categorized as a Schedule I drug, however, numerous committees ruled against criminalizing it. After the Marihuana Tax Act of 1937, New York City mayor Fiorello LaGuardia assembled a special committee with the New York Academy of Medicine “to make a thorough sociological and scientific investigation” into the effects of marijuana. The committee thoroughly disproved Anslinger’s claims of insanity, saying that “the basic personality structure of the individual does not change,” meaning that after the “high” has passed the user is unchanged personality-wise. They also stated that marijuana “does not evoke responses which would be totally alien [in an] undrugged state,” meaning that consuming marijuana would not cause an individual to act out of character [7]. The information provided in the LaGuardia Report is quite consistent with research today and helped contribute to the Supreme Court overturning the Marihuana Tax Act in 1969 in Leary v. United States.
However, while the act was overturned, it was soon replaced by the Controlled Substances Act of 1970 under the Nixon presidency, which established marijuana as a Schedule I drug. This act was another racially motivated law; Nixon’s Domestic Policy Advisor later admitted that “the Nixon White House had two enemies: the antiwar left and black people. We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities. We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did” [8].
Nixon appointed a commission to investigate the effects of marijuana, and in 1972 its report returned recommending decriminalization. The report argued that “Criminal law is too harsh a tool to apply to personal possession even in the effort to discourage use…It implies an overwhelming indictment of the behavior which we believe is not appropriate. The actual and potential harm of use of the drug is not great enough to justify intrusion by the criminal law into private behavior, a step which our society takes only with the greatest reluctance” [9]. In response, Nixon not only refused to decriminalize marijuana but created the DEA the following year to enforce the laws regarding marijuana as a Schedule I drug.
Because of the strict regulations on it today, marijuana research is still in its infancy. However, due to the aforementioned loosening of regulations on CBD, marijuana is being pushed into the spotlight once again. This, coupled with marijuana’s history and the present realities tying it to African American communities, has made the marijuana debate a hotbed for political discourse. Whether or not recreational marijuana becomes legal federally in the near future, it is clear that marijuana warrants further investigation to clear up the various inconsistencies surrounding its effects, both long-term and short-term, on the human body.
References
- National Institute on Drug Abuse research report series “Marijuana” https://www.drugabuse.gov/publications/research-reports/marijuana/what-marijuana
- Stephen V. Mahler, et al. “Endocannabinoid Hedonic Hotspot for Sensory Pleasure: Anandamide in Nucleus Accumbens Shell Enhances ‘Liking’ of a Sweet Reward.” Neuropsychopharmacology. 2007. https://www.nature.com/articles/1301376
- Kelly A. Gleason, et al. “Susceptibility of the adolescent brain to cannabinoids: long-term hippocampal effects and relevance to schizophrenia” Nature. 2012. https://www.nature.com/articles/tp2012122
- Peter Grinspoon, “Cannabidiol (CBD) — what we know and what we don’t” Harvard Health Blog, Harvard Health Publishing. August 24, 2018. https://www.health.harvard.edu/blog/cannabidiol-cbd-what-we-know-and-what-we-dont-2018082414476
- DEA “Drugs of Abuse (2017 edition)” 2017. https://www.dea.gov/sites/default/files/drug_of_abuse.pdf
- CBS News “The man behind the marijuana ban for all the wrong reasons” Nov. 17, 2016 https://www.cbsnews.com/news/harry-anslinger-the-man-behind-the-marijuana-ban/
- The LaGuardia Committee Report https://daggacouple.co.za/wp-content/uploads/1944/04/La-Guardia-report-1944.pdf
- Drug Policy Alliance “Top Adviser to Richard Nixon Admitted that ‘War on Drugs’ was Policy Tool to Go After Anti-War Protesters and Black People” http://www.drugpolicy.org/press-release/2016/03/top-adviser-richard-nixon-admitted-war-drugs-was-policy-tool-go-after-anti
- Schafer library of drug policy, “Marihuana: A Signal of Misunderstanding” http://www.druglibrary.org/schaffer/Library/studies/nc/ncmenu.htm
- Francesca M. Filbey, et al. “Long-term effects of marijuana use on the brain” PNAS. 2014. https://www.pnas.org/content/111/47/16913
- David Pagliaccio et al. “Shared Predisposition in the Association Between Cannabis Use and Subcortical Brain Structure” JAMA Psychiatry. October 2015. https://jamanetwork.com/journals/jamapsychiatry/fullarticle/2429550
- Albert Batalla et al. “Structural and Functional Imaging Studies in Chronic Cannabis Users: A Systematic Review of Adolescent and Adult Findings” PLOS One. February 2013. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0055821
- Cannabis research initiative “Human endocannabinoid system” UCLA Health. https://www.uclahealth.org/cannabis/human-endocannabinoid-system
- Matteo Rochetti et al. “Is cannabis neurotoxic for the healthy brain? A meta‐analytical review of structural brain alterations in non‐psychotic users” Psychiatry and Clinical Neurosciences. Sept. 2013. https://onlinelibrary.wiley.com/doi/epdf/10.1111/pcn.12085
- Paolo Fusar-Poli et al. “Modulation of effective connectivity during emotional processing by Δ9-tetrahydrocannabinol and cannabidiol” International Journal of Neuropsychopharmacology. May 2010. https://academic.oup.com/ijnp/article/13/4/421/712253
- NIDA Blog. “What is Hemp?” National Institute on Drug Abuse for Teens. November 2015. https://teens.drugabuse.gov/blog/post/what-is-hemp
- Peter Massey et al. “Long-term depression: multiple forms and implications for brain function” Cell. April 2007. https://www.cell.com/trends/neurosciences/fulltext/S0166-2236(07)00043-4?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0166223607000434%3Fshowall%3Dtrue
- Nicole Haloupek “What is the Amygdala?” Live Science. January 2020. https://www.livescience.com/amygdala.html
- Traute Demirakca et al. “Diminished gray matter in the hippocampus of cannabis users: Possible protective effects of cannabidiol” April 2011. https://www.sciencedirect.com/science/article/pii/S0376871610003364?via%3Dihub
- Laura Koenders et al. “Grey Matter Changes Associated with Heavy Cannabis Use: A Longitudinal sMRI Study” May 2016. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4880314/
Not All Heroes Wear Capes: How Algae Could Help Us Fight Climate Change
By Robert Polon, Biological Sciences Major, ’21
Author’s Note: In my UWP 102B class, we were assigned the task of constructing a literature review on any biology-related topic of our choice. A year ago, in my EVE 101 class, my professor briefly mentioned the idea that algae could be used to sequester atmospheric carbon dioxide in an attempt to slow the rate of climate change. I found this theory very interesting, and it resonated with me longer than most of the other subject matter in that class. I really enjoyed doing the research for this paper, and I hope it gives people some hope for the future. I’d like to thank my UWP professor, Kathie Gossett, for pointing me in the right direction throughout the process of writing this paper.
Abstract
With climate change growing ever more relevant in our daily lives, scientists are working hard to find solutions to slow and reverse the damage that humans are doing to the planet. Algae-based carbon sequestration methods are a viable solution to this problem. Photosynthesis allows algae to remove carbon dioxide from the atmosphere and turn it into biomass and oxygen. It has even been proposed that raw algal biomass can be harvested and used as a biofuel, providing a greener alternative to fossil fuels. Though the technology is not yet developed enough to make this change in our primary fuel source, incremental steps can be taken to slowly integrate algal biofuel into daily life. Further research and innovation could make full-scale replacement of fossil fuels with algal biofuel a feasible option. Methods of algal cultivation include open-ocean algal blooms, photobioreactors, algal turf scrubbing, and BICCAPS (bicarbonate-based integrated carbon capture and algae production system). There are many pros and cons to each method, but open-ocean algal blooms tend to be the most popular because they are the most economical and produce the most algae, even though they are the most harmful to the environment.
Keywords
Algae | Biofuel | Climate Change | Carbon Sequestration
Introduction
As we get further into the 21st century, climate change becomes less of a theory and more of a reality. Astronomically high post-Industrial Revolution rates of greenhouse gas emissions have started to catch up with us: the initial consequences of these emissions are now coming to light, with fear that worse is on the way. Many solutions have been proposed to decrease greenhouse gas emissions, but very few involve fixing the damage that has already been done. It has been proposed that growing algae in large quantities could help solve this climate crisis.
According to the Environmental Protection Agency, 76% of greenhouse gas emissions come in the form of carbon dioxide. As algae grow, they remove carbon dioxide from the atmosphere by converting it to biomass and oxygen via photosynthesis. Algae convert carbon dioxide to biomass at relatively fast rates: on average, one kilogram of algae utilizes 1.87 kilograms of CO2 daily, which means that one acre of algae utilizes approximately 2.7 tons of CO2 per day [1]. For comparison, one acre of a 25-year-old maple-beech-birch forest utilizes only 2.18 kilograms of CO2 per day [2]; that amount of sequestration can be accomplished by only 1.17 kilograms of algae. After its photosynthetic purpose has come to an end, the raw algal biomass can be harvested and used as an environmentally friendly biofuel. This literature review will serve as a comprehensive overview of the literature on this proposal to use algae as a primary combatant against global warming.
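To make the comparison above concrete, the short sketch below simply redoes the arithmetic using the rates quoted from the cited sources; it is a back-of-the-envelope check rather than a model of real cultivation, and the rounded comment values are approximations.

```python
# Back-of-the-envelope check of the sequestration figures quoted above.
# The rates are the ones reported in the text; nothing here is newly measured.

CO2_PER_KG_ALGAE_PER_DAY = 1.87      # kg of CO2 fixed per kg of algae per day [1]
CO2_PER_ACRE_ALGAE_PER_DAY = 2.7e3   # ~2.7 metric tons of CO2 per acre of algae per day, in kg [1]
CO2_PER_ACRE_FOREST_PER_DAY = 2.18   # kg of CO2 per acre of 25-year-old forest per day [2]

# Mass of algae needed to match one acre of forest:
algae_matching_forest_kg = CO2_PER_ACRE_FOREST_PER_DAY / CO2_PER_KG_ALGAE_PER_DAY
print(f"{algae_matching_forest_kg:.2f} kg of algae matches one acre of forest")   # ~1.17 kg

# Standing algal mass per acre implied by the 2.7-ton-per-acre figure:
implied_algae_per_acre_kg = CO2_PER_ACRE_ALGAE_PER_DAY / CO2_PER_KG_ALGAE_PER_DAY
print(f"~{implied_algae_per_acre_kg:,.0f} kg of algae per acre implied")          # ~1,444 kg
```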
Carbon Dioxide
For centuries, heavy usage of fossil fuels has tarnished Earth’s atmosphere with the addition of greenhouse gases [3]. These gases trap heat by absorbing infrared radiation that would otherwise leave Earth’s atmosphere. This raises the overall temperature of the planet, which leads to the melting of polar ice caps, rising sea levels, and strengthening tropical storm systems, among many other devastating environmental effects [4]. The most commonly emitted greenhouse gas, carbon dioxide, tends to be the primary focus of efforts to mitigate global warming.
Algal treatment methods are no different. Every algal treatment option depends on the fact that algae sequester atmospheric carbon dioxide through photosynthesis, converting it into biomass and releasing oxygen into the atmosphere as a product of the photosynthetic process [5].
Algal Cultivation
There are four proposed methods of algal cultivation: open-ocean algal blooms, photobioreactors, algal turf scrubbing, and BICCAPS. These techniques all differ greatly, with various benefits and drawbacks to each.
Open-Ocean Algal Blooms
Algae are most abundant on the surface of the open ocean. With the addition of their limiting nutrient, iron, in the form of iron(II) sulfate (FeSO4), massive algal blooms can easily be sparked anywhere in the ocean [3]. This seems to be the way that most scientists envision sequestration because, of all proposed cultivation techniques, this one produces the most algae in the least amount of time. Intuitively, this method removes the most carbon dioxide from the atmosphere, as the amount of CO2 removed is directly proportional to the quantity of algae undergoing photosynthesis.
There are many benefits to open-ocean algal blooms. There is no shortage of space on the surface of the ocean, so, hypothetically, a nearly unlimited amount of algal mass can be cultivated this way. The technique is also very cost-efficient: all that is needed to employ it is some iron(II) sulfate, and nature will do the rest [3].
Once the algal bloom has grown to its maximum size, there is an overabundance of algal biomass on the surface of the ocean. Some researchers have proposed that this mass be collected and used as a biofuel [5, 6, 7]. Others have proposed letting nature play its course and allowing the dead algae to sink to the bottom of the ocean, ensuring that the carbon dioxide taken out of the atmosphere is stored safely on the seafloor [8]. There, the algal biomass is accessible for consumption by shellfish, which store the carbon in their calcium carbonate shells [3].
This solution is not an easy one to deploy, however, because algal blooms bring many problems to the local ecosystems. Often referred to as harmful algal blooms (HABs), these rapidly growing algae clusters are devastating to the oceanic communities they touch. They increase acidity, lower temperature, and severely deplete oxygen levels in waters they grow in [9]. Most lifeforms aren’t prepared to handle environmental changes that push them out of their niches, so it’s easy to see why HABs kill significant portions of marine life.
HABs can affect humans as well. Many algal species are toxic to humans, and ingestion of contaminated fish or water from areas affected by these blooms can lead to extreme sickness and even death. Examples of the resulting illnesses include ciguatera fish poisoning, paralytic shellfish poisoning, neurotoxic shellfish poisoning, amnesic shellfish poisoning, and diarrheic shellfish poisoning [10]. The effects of harmful algal blooms have only been studied in the short term, but from what we have seen, they are a clear barrier to using this form of algal cultivation [11].
Photobioreactors
Photobioreactors are another frequently proposed tool for cultivating algae. These artificial growth chambers have controlled temperature, pH, and nutrient levels that allow for optimal algal growth rates [12]. They can also run on wastewater that is not suitable for human consumption. Photobioreactors minimize evaporation and, with the addition of iron, magnesium, and vitamins, increase rates of carbon dioxide capture [1]. Due to the high concentration of algae in a relatively small space, photobioreactors have the highest rates of photosynthesis (and thus carbon dioxide uptake) of all the cultivation methods discussed in this paper.
This innovative technology was driven primarily by the need to come up with an alternative to triggering open-ocean algal blooms. Photobioreactors eliminate pollution and water contamination risks that are prevalent in harmful algal blooms. Furthermore, they make raw algal biomass easily accessible for collection and use as a biofuel, which open-ocean algal blooms do not [12].
The main drawback to this method is that the cost of building and maintaining photobioreactors is simply too high to be economically feasible right now [12]. Technological developments are needed to lower the short-term cost of operation and allow for mass production if we want to use them as a primary means of carbon sequestration. Their long-term economic feasibility also remains unknown, as most of the cost is incurred during construction of the photobioreactors. Money is made back through the algae cultivated, but the technology has not been around long enough to support concrete long-term cost-benefit analyses without speculation [14].
Algal Turf Scrubbing (ATS)
Described in a 2018 proposal by botanist Walter Adey and colleagues, algal turf scrubbing (ATS) is a technique created to efficiently cultivate algae for the agriculture and biofuel industries. The process uses miniature wave generators to slightly disturb the flat surface of a floway and stimulate the growth of numerous algal species in the water. Biodiversity in these floways increases over time, and a typical ATS floway will eventually host over 100 different algae species [11].
Heavy metals and other toxic pollutants occasionally make their way into the floways; however, they are promptly removed to ensure that the product is as nontoxic as possible. The algal biomass is harvested biweekly and has a variety of uses. Less toxic harvests can be used as fertilizers in the agricultural industry, which research suggests is the most economically efficient use for the harvest. The biomass can also go towards biofuel, although the creators of the ATS system believe the majority of their product will go towards agricultural use because they would not be able to produce enough algae to keep up with demand if our society moved towards using it as a biofuel [11].
The problems with ATS are not technological, but sociopolitical, as the research team behind it fears that they will not get the funding and resources needed to perform their cultivation at an effective level [11].
BICCAPS
The bicarbonate-based integrated carbon capture and algae production system (BICCAPS) was proposed to reduce the high costs of commercial algal biomass production. It recycles the bicarbonate created when algae capture carbon dioxide from the atmosphere and uses it to culture alkalihalophilic microalgae (algae that thrive at a very basic pH, above 8.5). Through this ability to culture more algae, the system should, in theory, cut the costs of both carbon capture and microalgal culture. It is also very sustainable, as it recycles nutrients and minimizes water usage. The algae cultivated can also be turned into biofuel to lower fossil fuel usage [13].
The main drawback to this closed-loop system is that it does not cultivate as much algae as the other systems, though work is currently being done to improve this. It has been shown that the efficiency of BICCAPS improves significantly with the addition of sodium to the water, which stimulates the growth of alkalihalophilic microalgae [13]. This means that, with a modest improvement in the system’s efficiency, BICCAPS could become a primary algal biomass production strategy because of its low cost and sustainability.
Use of Algae as a Biofuel
While algae may not match the energy density of fossil fuels, they are not far behind. Algal biomass can be burned as a biofuel to power transportation, which would allow us to lower our use of fossil fuels and, subsequently, our greenhouse gas emissions. When dry algal biomass is burned, it releases more oxygen and less carbon dioxide than our current fuel sources. The increase in oxygen released into the atmosphere not only helps to lower CO2 emissions but also increases the overall atmospheric ratio of oxygen to carbon dioxide. More research still has to be done to find the best possible blend of algal species for fuel use [12]. Using algae alone as a biofuel would not meet the world’s energy demand, but the technology for photobioreactors continues to improve, giving hope that algae may one day be used more than fossil fuels [6].
A common counterargument to algal biofuel proposals is that burning dry algae provides only half the caloric value of a similarly sized piece of coal. While this is true, it should be taken into consideration that coal has an extraordinarily high caloric value and that the caloric value of algae is still considered high relative to alternative options [3].
It is often suggested that bioethanol, which essentially uses crops as fuel, should be used instead of algal biofuel. The main problem with this proposal is that farmers would spend more time cultivating inedible crops because they make for better fuel, which would lead to food shortages on top of the world’s existing hunger problem. Farming crops also takes up land, while growing algae does not [7].
Drawbacks
The main problems associated with using algae as a biofuel are technological and economic. We simply do not have the technology in place right now to produce enough algae to completely replace fossil fuels. In order to do this, we would have to establish full-scale production plants, which is not as economically viable as simply continuing to use the fossil fuels that degrade our planet [12]. Receiving funding for the commercialization of algae is the biggest obstacle this plan faces. It is difficult to get money allocated to environmental conservation efforts because, unfortunately, they do not rank very highly among government priorities. Algal carbon sequestration has also never been demonstrated at a commercial scale, so there is hesitation to fully commit resources to something that seems like a gamble.
Alternative Uses
It has also been proposed that algal biomass grown to sequester carbon dioxide should be used in the agricultural industry. As previously mentioned, the creators of ATS have suggested using it as a fertilizer [11]. Others say that it can be used to feed livestock or humans, as some cultures actually consume algae already [12]. The seemingly infinite supply of microbes can also be harvested and used heavily in the medical industry in the form of antimicrobial, antiviral, anti-inflammatory, anti-cancer, and antioxidant treatments [7].
Conclusion
Algae can be used to fight climate change because it removes carbon dioxide from our atmosphere, stores it as biomass, and replaces it with oxygen. Arguments have been made in many directions over the best method of algal cultivation. Triggering open-ocean algal blooms is certainly the most cost-efficient of these methods, and it produces the most algal biomass. The problem with using this technique is that these algal blooms have devastating ecological effects on the biological communities they come in contact with. Photobioreactors are another popular method among those who favor this strategy because of their ability to efficiently produce large quantities of algae; however, the main inhibition to their usage is the extremely high cost of construction and operation. With more focus on developing lower cost photobioreactors, they can potentially become the primary source of algal growth. Algal turf scrubbing is another strategy of algae cultivation that struggles with the problem of acquiring adequate funding for the operation. BICCAPS is a relatively inexpensive and eco-friendly way to grow algae in a closed system, but it yields low quantities of algal biomass compared to the other systems.
The raw algal biomass from these growth methods can potentially be used as a biofuel. Dry algal biomass has a high caloric value, which makes it well suited for burning to power equipment. It does not burn as well as fossil fuels, but it does release more oxygen and less carbon dioxide than fossil fuels when burned. Of course, funding will be needed to increase algae production and make this a possibility, but with more research and advances in the field, algae could remove large amounts of the carbon dioxide stuck in Earth’s atmosphere and become our primary fuel source down the line.
References
- Anguselvi V, Masto R, Mukherjee A, Singh P. CO2 Capture for Industries by Algae. IntechOpen. 2019 May 29.
- Toochi EC. Carbon sequestration: how much can forestry sequester CO2? MedCrave. 2018;2(3):148–150.
- Haoyang C. Algae-Based Carbon Sequestration. IOP Conf. Series: Earth and Environmental Science. 2018 Nov 1. doi:10.1088/1755-1315/120/1/012011
- Climate Science Special Report.
- Nath A, Tiwari P, Rai A, Sundaram S. Evaluation of carbon capture in competent microalgal consortium for enhanced biomass, lipid, and carbohydrate production. 3 Biotech. 2019 Oct 3.
- Ghosh A, Kiran B. Carbon Concentration in Algae: Reducing CO2 From Exhaust Gas. Trends in Biotechnology. 2017 May 3:806–808.
- Kumar A, Kaushal S, Saraf S, Singh J. Microbial bio-fuels: a solution to carbon emissions and energy crisis. Frontiers in Bioscience. 2018 Jun 1:1789–1802.
- Moreira D, Pires JCM. Atmospheric CO2 capture by algae: Negative carbon dioxide emission path. Bioresource Technology. 2016 Oct 10:371–379.
- Wells ML, Trainer VL, Smayda TJ, Karlson BSO, Trick CG. Harmful algal blooms and climate change: Learning from the past and present to forecast the future. Harmful Algae. 2015;49:68–93.
- Grattan L, Holobaugh S, Morris J. Harmful algal blooms and public health. Harmful Algae. 2016;57:2–8.
- Calahan D, Osenbaugh E, Adey W. Expanded algal cultivation can reverse key planetary boundary transgressions. Heliyon. 2018;4(2).
- Adeniyi O, Azimov U, Burluka A. Algae biofuel: Current status and future applications. Renewable and Sustainable Energy Reviews. 2018;90:316–335.
- Zhu C, Zhang R, Chen L, Chi Z. A recycling culture of Neochloris oleoabundans in a bicarbonate-based integrated carbon capture and algae production system with harvesting by auto-flocculation. Biotechnology for Biofuels. 2018 Jul 24.
- Richardson JW, Johnson MD, Zhang X, Zemke P, Chen W, Hu Q. A financial assessment of two alternative cultivation systems and their contributions to algae biofuel economic viability. Algal Research. 2014;4:96–104.
Where the Bison Roam and the Dung Beetles Roll: How American Bison, Dung Beetles, and Prescribed Fires are Bringing Grasslands Back
By John Liu, Wildlife, Fish, and Conservation Biology ‘21
Author’s Note: In this article, I will explore the overwhelming impact that the teeny tiny dung beetles have on American grasslands. Dung beetles, along with reintroduced bison and prescribed fires, are stomping, rolling, and burning through the landscape, all in an effort to revive destroyed grassland habitats. Barber et al. look at how the beetles are reacting to the bison herds and prescribed fires. Seemingly unrelated factors interact with each other closely, producing results that bring hope to one of the most threatened habitats.
Watching grass grow: it’s lit
Grasslands are quiet from afar, often characterized by windblown tallgrasses and peeking prairie dogs. But in fact, they are dynamic. Historically, grasslands were constantly changing: fires ripping through the landscape, bison stampedes kicking up dust, and grasses changing colors with the seasons [2]. However, climate change, increasing human populations, and agricultural conversion all contribute to an increasing loss of critical habitats, with grasslands among the most affected [7]. The loss of grasslands not only eliminates the fauna that once resided there but also reduces the ecosystem services those habitats once provided. Thus, restoring grassland habitats is of increasing concern. With the help of bison, dung beetles, and prescribed fires, recovery of grasslands is promising and likely swift.
Eat, poop, burn, repeat
As previously mentioned, grasslands thrive when continuously disturbed. The constant disturbance keeps woody vegetation from encroaching, nonnative plants from invading, and biodiversity from declining as a result of competitive exclusion between species [12]. To accomplish this, grasslands rely on large herbivore grazers such as American bison (Bison bison) to rip through the vegetation and fires to clear large areas of dry debris [9].
American bison are herbivore grazers, animals that feed on plant matter near the ground. The presence of these grazers alters available plant biomass, vegetation community structure, and soil conditions, a result of their constant trampling, consuming, and digesting of plant matter [9, 11]. Because of their valuable impact on the landscape, bison are considered a keystone species, one with an overwhelming, essential role in the success of an ecosystem [8]. Grasslands would look vastly different without bison walking, eating, and defecating on them [9].
But bison do not aimlessly roam the grasslands, eating anything they come across. They specifically target areas that have been recently burned, because these scorched areas produce new growth that is higher in nutritional content [3, 5]. Historically, lightning strikes or intense summer heat caused these fires and drove the movement of grazers, but human intervention now inhibits these natural occurrences. Instead, prescribed fires, planned and controlled burnings performed by humans, mitigate the loss of natural fires and encourage the bison’s selective foraging behavior [4, 12]. Inciting bison to follow burned patches benefits the grasslands in more ways than one. First, it prevents overgrazing of any one particular area: as the herds move throughout the landscape, some areas reestablish while others are cleared by the bison. Second, the simple act of traversing large distances physically changes the landscape. Bison are large animals that travel in herds, and as they move about the grasslands, they trample vegetation and compact the soil beneath their hooves. Finally, grazing bison interrupt the process of competitive exclusion, in which competition for resources limits species’ success, among native plants. They indiscriminately consume vegetation in these areas, leaving little room for any one species of plant to outcompete another [9].
The world is your toilet… with dung beetles!
What goes in must come out, and bison are no exception to that rule. After digesting the grasses they eat, bison leave behind a trail of dung and urine. The nitrogen-rich waste feeds back into the ecosystem, offering valuable nutrients to plants and soil-dwelling organisms alike [1]. But a recent study by Barber et al. highlights a small but critical component that ensures nutrient distribution is maximized in grasslands: the dung beetles (Scarabaeidae: Scarabaeinae and Aphodiinae, and Geotrupidae).
Dung beetles rely on the solid waste of their mammalian partners. The beetles eat, distribute, and even bury the dung, which helps with carbon sequestration [10]. They are found around the world, from the rainforests of Borneo to the grasslands of North America, and interact with each environment differently. In Borneo, dung beetles distribute seeds found in the waste of fruit-loving howler monkeys (Alouatta spp.) [6], while in North America they spread nutrients found in the waste of grazing bison. They provide a unique ecosystem function: scattering nutrient-rich dung throughout vast landscapes. These attributes have made them an increasingly popular study taxon in recent years.
Figure 1: Grassland health is largely dependent on the interplay of multiple living and non-living elements. In 1.1, the area is dominated by woody vegetation and few grasses due to a lack of disturbance. In 1.2, the introduction of prescribed fires clears some woody vegetation, allowing grasses to compete. In 1.3, bison introduce nutrients into the landscape, increasing productivity; however, the distribution of dung is limited. In 1.4, the addition of dung beetles leads to better distribution of nutrients and thus more productivity and species diversity.
Barber et al. took a closer look at exactly how dung beetles were reacting to bison grazing and prescribed fires blazing through their grassy fields. They found significant contributions from each, with both noticeably directing the movement and influencing the abundance of the beetles. As the bison followed the flames, so did the beetles. The beetles’ dependence on bison dung showed when researchers compared beetle abundance in two key areas: those with bison and those without. There were significantly more beetles in areas with bison, likely feeding on their dung, scattering it, and burying it, all while simultaneously feeding the landscape. Prescribed fires also led to increases in beetle abundance: whether a site was 1.5 years or 30 years post-restoration, researchers consistently saw more beetles when prescribed fires were performed. This further underscores the importance of disturbance in grassland habitats, not only for ecosystem health but also for species richness.
And the grass keeps growing
The reintroduction of bison in the grasslands of America proved successful in rebuilding a lost habitat, with the help of dung beetles and prescribed fires. However, bison and dung beetles are just one of many examples of unlikely pairings rebuilding lost habitats. Although the large-scale ecological processes have been widely studied, species-to-species interactions are often overlooked. Continued surveys of the grasslands will reveal more about the interactions of contributing factors and their effects on each other and the habitat around them.
Citations
- Barber, Nicholas A., et al. “Initial Responses of Dung Beetle Communities to Bison Reintroduction in Restored and Remnant Tallgrass Prairie.” Natural Areas Journal, vol. 39, no. 4, 2019, p. 420., doi:10.3375/043.039.0405.
- Collins, Scott L., and Linda L. Wallace. Fire in North American Tallgrass Prairies. University of Oklahoma Press, 1990.
- Coppedge, B.R., and J.H. Shaw. 1998. Bison grazing patterns on seasonally burned tallgrass prairie. Journal of Range Management 51:258-264.
- Fuhlendorf, S.D., and D.M. Engle. 2004. Application of the fire–grazing interaction to restore a shifting mosaic on tallgrass prairie. Journal of Applied Ecology 41:604-614.
- Fuhlendorf, S.D., D.M. Engle, J.A.Y. Kerby, and R. Hamilton. 2009. Pyric herbivory: Rewilding landscapes through the recoupling of fire and grazing. Conservation Biology 23:588-598.
- Genes, L., Fernandez, F. A., Vaz‐de‐Mello, F. Z., da Rosa, P., Fernandez, E., and Pires, A. S. (2018), Effects of howler monkey reintroduction on ecological interactions and processes. Conservation Biology. doi:10.1111/cobi.13188
- Gibson, D.J. 2009. Grasses and Grassland Ecology. Oxford University Press, Oxford, UK.
- Khanina, Larisa. “Determining Keystone Species.” Ecology and Society, The Resilience Alliance, 15 Dec. 1998, www.ecologyandsociety.org/vol2/iss2/resp2/.
- Knapp, Alan K., et al. “The Keystone Role of Bison in North American Tallgrass Prairie: Bison Increase Habitat Heterogeneity and Alter a Broad Array of Plant, Community, and Ecosystem Processes.” BioScience, vol. 49, no. 1, 1999,
- Menendez, R., P. Webb, and K.H. Orwin. 2016. Complementarity of ´ dung beetle species with different functional behaviours influence dung–soil carbon cycling. Soil Biology and Biochemistry 92:142-148
- Mcmillan, Brock R., et al. “Vegetation Responses to an Animal-Generated Disturbance (Bison Wallows) in Tallgrass Prairie.” The American Midland Naturalist, vol. 165, no. 1, 2011, pp. 60–73., doi:10.1674/0003-0031-165.1.60.
- Packard, S., and C.F. Mutel. 2005. The Tallgrass Restoration Handbook: For Prairies, Savannas, and Woodlands. Island Press, Washington, DC.
- Raine, Elizabeth H., and Eleanor M. Slade. “Dung Beetle–Mammal Associations: Methods, Research Trends and Future Directions.” Proceedings of the Royal Society B: Biological Sciences, vol. 286, no. 1897, 2019, p. 20182002., doi:10.1098/rspb.2018.2002.
Pharmacogenomics in Personalized Medicine: How Medicine Can Be Tailored To Your Genes
By: Anushka Gupta, Genetics and Genomics, ‘20
Author’s Note: Modern medicine relies on technologies that have barely changed over the past 50 years, despite all of the research that has been conducted on new drugs and therapies. Although medications save millions of lives every year, any one of these might not work for one person even if it works for someone else. With this paper, I hope to shed light on this new rising field and the lasting effects it can have on the human population.
Future of Modern Medicine
Take the following scenario: you’re experiencing a persistent cough, a loss of appetite, and unexplained weight loss, only to then find an egg-like swelling under your arm. Today, a doctor would determine your diagnosis by taking a biopsy of your arm and analyzing the cells under the microscope, a 400-year-old technology. You have non-Hodgkin lymphoma. Today’s treatment plan for this condition is a generic, one-size-fits-all chemotherapy with some combination of alkylating agents, anti-metabolites, and corticosteroids (to name a few) injected intravenously to target fast-dividing cells, harming both cancer cells and healthy cells [1]. This approach may be effective, but if it doesn’t work, your doctor tells you not to despair – there are other possible drug combinations that might be able to save you.
Flash forward to the future. Your doctor will now instead scan your arm with a DNA array, a computer chip-like device that can register the activity patterns of thousands of different genes in your cells. It will then tell you that your case of lymphoma is actually one of six distinguishable types of T-cell cancer, each of which is known to respond best to different drugs. Your doctor will then use a SNP chip to flag medicines that won’t work in your case since your liver enzymes break them down too fast.
Tailoring Treatment to the Individual
The latter case is the one we would all hope for in this scenario. Luckily, it may become reality with the implementation of pharmacogenomics in personalized medicine. This new field takes advantage of the fact that new medications typically require extensive trials and testing to ensure their safety, and it holds potential as a way to streamline the traditional testing process for pharmaceuticals.
In a conventional trial, only the average response is reported, and if the drug is shown to have adverse side effects in any fraction of the population, it is immediately rejected. “Many drugs fail in clinical trials because they turn out to be toxic to just 1% or 2% of the population,” says Mark Levin, CEO of Millennium Pharmaceuticals [2]. With genotyping, drug companies will be able to identify the specific gene variants underlying severe side effects, allowing occasional toxic responses to be tolerated because gene tests will determine who should and should not receive the drug. Such pharmacogenomic advances could more than double the FDA approval rate of drugs that reach the clinic. In the past, fast-tracking was reserved for medications intended to treat otherwise untreatable illnesses. Pharmacogenomics, however, allows medications to undergo an expedited process regardless of the severity of the disease. There would be fewer guidelines to follow because the drug would not need to produce a desirable outcome in the entire population. As long as the cause of the adverse reaction can be attributed to a specific genetic variant, the drug can be approved by the FDA [3].
Certain treatments already exist under this model, such as for those who carry a particular genetic variant of cystic fibrosis. Additionally, this approach will help reduce the number of yearly cases of adverse drug reactions. As with any emerging discipline, pharmacogenomics is not without its challenges, but new research continues to test its viability.
With pharmacogenomics-informed personalized medicine, individualized treatment can be designed according to one’s genomic profile to predict the clinical outcome of different treatments in different patients [4]. Normally, drugs are tested on a large population and the average response is reported. While that method of medicine relies on the law of averages, personalized medicine recognizes that no two patients are alike [5].
Genetic Variants
Doubling the approval rate would make a larger variety of drugs available to patients with unique circumstances in which the generic treatment fails. In pharmacogenomics, genomic information is used to study individual responses to drugs. Experiments can be designed to determine the correlation between particular gene variants and exact drug responses. Specifically, modern approaches, including multigene analysis and whole-genome single nucleotide polymorphism (SNP) profiles, will assist in clinical trials for drug discovery and development [5]. SNPs are especially useful because their combinations are genetically unique to each individual and contribute to many variable characteristics, such as appearance and personality. A strong grasp of SNPs is fundamental to understanding why an individual may have a specific reaction to a drug, and these genetic markers can be mapped to particular drug responses.
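To illustrate the idea of mapping genetic markers to drug responses, the toy sketch below shows how a pharmacogenomic report might translate a patient’s genotype at a single variant into a predicted metabolizer phenotype and a prescribing flag. The variant ID, genotypes, and rules are hypothetical placeholders invented for illustration; they are not clinical guidance and do not correspond to any real FDA label.

```python
# Illustrative only: a toy lookup from SNP genotype to a predicted metabolizer
# phenotype and a prescribing flag. The variant, genotypes, and rules below are
# hypothetical placeholders, not clinical guidance.

PHENOTYPE_BY_GENOTYPE = {
    # (hypothetical variant, genotype) -> predicted metabolizer status
    ("rsEXAMPLE1", "AA"): "normal metabolizer",
    ("rsEXAMPLE1", "AG"): "intermediate metabolizer",
    ("rsEXAMPLE1", "GG"): "poor metabolizer",
}

FLAG_BY_PHENOTYPE = {
    "normal metabolizer": "standard dose",
    "intermediate metabolizer": "consider dose adjustment",
    "poor metabolizer": "consider alternative drug",
}

def prescribing_flag(variant: str, genotype: str) -> str:
    """Return a drug-selection flag for one (variant, genotype) pair."""
    phenotype = PHENOTYPE_BY_GENOTYPE.get((variant, genotype), "unknown")
    return FLAG_BY_PHENOTYPE.get(phenotype, "no guidance available")

print(prescribing_flag("rsEXAMPLE1", "GG"))  # -> "consider alternative drug"
```

In practice, a real pharmacogenomic report would combine many variants, gene-level diplotypes, and curated evidence rather than a single lookup, but the core idea of translating genotype into a prescribing recommendation is the same.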
Research regarding specific genetic variants and their association with varying drug responses will be fundamental to prescribing a drug for a patient. The design and implementation of personalized medical therapy will not only improve the outcome of treatments but also reduce the risk of toxicity and other adverse effects. A better understanding of individual variation and its effect on drug response, metabolism, excretion, and toxicity has the potential to replace the trial-and-error approach to treatment. Evidence of the clinical utility of pharmacogenetic testing is only available for a few medications, and Food and Drug Administration (FDA) labels require pharmacogenetic testing for only a small number of drugs [6].
Cystic Fibrosis: Case Study
While this concept may seem far-fetched, a few treatments have already been approved by the FDA for specific populations, as this field of study promotes the development of targeted therapies. For example, the drug ivacaftor was approved for patients with cystic fibrosis (CF), a genetic disease that causes persistent lung infections and limits the ability to breathe. Those diagnosed with CF have a mutation in the cystic fibrosis transmembrane conductance regulator (CFTR) gene, rendering the resulting CFTR protein defective. This protein is responsible for moving chloride to the cell surface, attracting water that then generates mucus. However, those with the mutation have thick and sticky mucus, leaving the patient susceptible to infection because bacteria that would normally be cleared become trapped [7]. Ivacaftor is only approved for CF patients who carry the specific G551D variant, one particular mutation in the CFTR gene. The drug targets the CFTR protein, increases its activity, and consequently improves lung function [8]. It is important to note that G551D is just one of roughly 1,700 currently known mutations that can cause CF.
Adverse Drug Reactions
Pharmacogenomics also addresses the unpredictable adverse effects of drugs, especially medications that are taken too often or for too long. These adverse drug reactions (ADRs) are estimated to cost $136 billion annually. Within the United States alone, serious side effects from pharmaceutical drugs occur in 2 million people each year and may cause as many as 100,000 deaths, making them the fourth most common cause of death according to the FDA [9].
The mysterious and unpredictable side effects of various drugs have been chalked up to individual variation encoded in the genome rather than drug dosage. Genetics also determines hypersensitivity reactions in patients who may be allergic to certain drugs; in these cases, the body initiates a rapid and aggressive immune response that can hinder breathing and may even lead to cardiovascular collapse [5]. This is just one of countless cases where unknown patient hypersensitivity to drugs can lead to extreme outcomes. However, new research in pharmacogenomics suggests that as much as 80% of this variability in drug response could be reduced. A significant share of these ADRs could therefore be avoided through better-informed patient management, leading to better outcomes [11].
Challenges
Pharmacogenomics-informed medicine may suggest the ultimate demise of the traditional model of drug development, but the concept of targeted therapy is still in its early stages. One reason is that most pharmacogenetic traits involve more than one gene, making it even more difficult to understand or predict the different variations of a complex phenotype like drug response. Genome-wide approaches have also produced evidence of drugs having multiple targets and numerous off-target effects [4].
Even though this is a promising field, there are challenges that must be overcome. There is a large gap between the genomic information available for various diseases and conditions and the readiness of the primary care workforce, as many healthcare workers are not prepared to integrate genomics into their daily practice. Medical school curricula would need to be updated to cover pharmacogenomics-informed personalized medicine. The complexity of the field and its inherently interdisciplinary nature also make it difficult to present this new research to broader audiences, including medical personnel [12].
Conclusion
The field has made important strides over the past decade, but clinical trials are still needed to not only identify the various links between genes and treatment outcome, but also to clarify the meaning of these associations and translate them into prescribing guidelines [4]. Despite its potential, there are not many examples where pharmacogenomics impacts clinical utility, especially since many genetic variants have not been studied yet. Nonetheless, progress in the field gives us a glimpse of a time where pharmacogenomics and personalized medicine will be a part of regular patient care.
Sources
- “Chemotherapy for Non-Hodgkin Lymphoma.” American Cancer Society, www.cancer.org/cancer/non-hodgkin-lymphoma/treating/chemotherapy.html.
- Greek, Jean Swingle., and C. Ray. Greek. What Will We Do If We Don’t Experiment on Animals?: Medical Research for the Twenty-First Century. Trafford, 2004, Google Books, books.google.com/books?id=mB3t1MTpZLUC&pg=PA153&lpg=PA153&dq=mark+levin+drugs+fail+in+clinical+trials&source=bl&ots=ugdZPtcAFU&sig=ACfU3U12d-BQF1v67T3WCK8-J4SZS9aMPg&hl=en&sa=X&ved=2ahUKEwjVn6KfypboAhUDM6wKHWw1BrQQ6AEwBXoECAkQAQ#v=onepage&q=mark%20levin%20drugs%20fail%20in%20clinical%20trials&f=false.
- Chary, Krishnan Vengadaraga. “Expedited Drug Review Process: Fast, but Flawed.” Journal of Pharmacology & Pharmacotherapeutics, Medknow Publications & Media Pvt Ltd, 2016, www.ncbi.nlm.nih.gov/pmc/articles/PMC4936080/.
- Schwab, M., Schaeffeler, E. Pharmacogenomics: a key component of personalized therapy. Genome Med 4, 93 (2012). https://doi.org/10.1186/gm394
- Adams, J. (2008) Pharmacogenomics and personalized medicine. Nature Education 1(1):194
- Singh D.B. (2019) The Impact of Pharmacogenomics in Personalized Medicine. In: Silva A., Moreira J., Lobo J., Almeida H. (eds) Current Applications of Pharmaceutical Biotechnology. Advances in Biochemical Engineering/Biotechnology, vol 171. Springer, Cham
- “About Cystic Fibrosis.” CF Foundation, www.cff.org/What-is-CF/About-Cystic-Fibrosis/.
- Eckford PD, Li C, Ramjeesingh M, Bear CE: CFTR potentiator VX-770 (ivacaftor) opens the defective channel gate of mutant CFTR in a phosphorylation-dependent but ATP-independent manner. J Biol Chem. 2012, 287: 36639-36649. 10.1074/jbc.M112.393637.
- Pirmohamed, Munir, and B. Kevin Park. “Genetic Susceptibility to Adverse Drug Reactions.” Trends in Pharmacological Sciences, vol. 22, no. 6, 2001, pp. 298–305. doi:10.1016/s0165-6147(00)01717-x.
- Adams, J. (2008) Pharmacogenomics and personalized medicine. Nature Education 1(1):194
- Cacabelos, Ramón, et al. “The Role of Pharmacogenomics in Adverse Drug Reactions.” Expert Review of Clinical Pharmacology, U.S. National Library of Medicine, May 2019, www.ncbi.nlm.nih.gov/pubmed/30916581.
- Roden, Dan M, et al. “Pharmacogenomics: Challenges and Opportunities.” Annals of Internal Medicine, U.S. National Library of Medicine, 21 Nov. 2006, www.ncbi.nlm.nih.gov/pmc/articles/PMC5006954/#idm140518217413328title.
The Role of Dendritic Spine Density in Neuropsychiatric and Learning Disorders
Photo originally by MethoxyRoxy on Wikimedia Commons. No changes. CC License BY-SA 2.5.
By Neha Madugala, Cognitive Science, ‘21
Author’s Note: Last quarter I took Neurobiology (NPB100) with Karen Zito, a professor at UC Davis. I was interested in her research in dendritic spines and its correlation to my personal area of interest in research regarding the language and cognitive deficiencies present in different populations such as individuals with schizophrenia. There seems to be a correlational link between the generation and quantity of dendritic spines and the presence of different neurological disorders. Given the dynamic nature of dendritic spines, current research is studying their exact role and the potential to manipulate these spines in order to impact learning and memory.
Introduction
Dendritic spines are small, bulbous protrusions that line the sides of dendrites on a neuron [12]. They serve as a major site of synapses for excitatory neurons, which continue signal propagation in the brain. Relatively little is known about the exact purpose and role of dendritic spines, but as of now, there seems to be a correlation between the concentration of dendritic spines and the presence of different disorders, such as autism spectrum disorder (ASD), schizophrenia, and Alzheimer’s disease. Scientists hypothesize that dendritic spines are a key player in the pathogenesis of various neuropsychiatric disorders [8]. It should be noted that other morphological changes are also observed when individuals with the mentioned neuropsychiatric disorders are compared to neurotypical individuals. However, all these disorders share the common thread of abnormal dendritic spine density.
The main disorders studied in relation to dendritic spine density are autism spectrum disorder (ASD), schizophrenia, and Alzheimer’s disease. Current studies suggest that these disorders cause the number of dendritic spines to stray from what is observed in a neurotypical individual. It should be noted that there is a general decline in dendritic spines as an individual ages; however, intellectual disabilities and neuropsychiatric disorders seem to alter this density at a more extreme rate. The graph demonstrates the general trend of dendritic spine density for various disorders, though these trends may vary slightly across individuals with the same disorder.
Dendritic Spines
I. Role of Dendritic Spines
Dendritic spines are protrusions found on certain types of neurons throughout the brain, such as in the cerebellum and cerebral cortex. They were first identified by Ramon y Cajal, who classified them as “thorns or short spines” located nonuniformly along the dendrite [6].
The entire human cerebral cortex contains on the order of 10^14 dendritic spines, and a single dendrite can contain several hundred spines [12]. There is an overall greater density of dendritic spines on peripheral dendrites than on proximal dendrites and the cell body [3]. Their main role is to assist in synapse formation on dendrites.
Dendritic spines fall into two categories: persistent and transient spines. Persistent spines are considered ‘memory’ spines, while transient spines are considered ‘learning’ spines. Transient spines are those that exist for four days or less, and persistent spines are those that exist for eight days or longer [5].
The dense concentration of spines on dendrites is crucial to the dendrites’ function. At an excitatory synapse, the release of neurotransmitter onto excitatory receptors on the postsynaptic cell results in an excitatory postsynaptic potential (EPSP), a small depolarization that pushes the cell toward firing an action potential. An action potential is the electrical signal by which one neuron transmits information to another. In order for a neuron to fire an action potential, positive charge must accumulate at its synapses until the membrane reaches a certain threshold (Figure 2); that is, the cell must reach a certain level of depolarization, a change in the charge difference across the neuron’s membrane that makes the inside more positive. A single EPSP may not produce enough depolarization to reach this threshold. As a result, the presence of multiple dendritic spines on the dendrite allows multiple synapses to form and multiple EPSPs to be summated; with the summation of many EPSPs across its dendrites, the cell can reach the action potential threshold (a simple numerical sketch of this summation follows Figure 2). The greater the density of dendritic spines along the postsynaptic cell, the more synaptic connections can be formed, increasing the chance that an action potential occurs.
Figure 2. Firing of Action Potential (EPSP)
- Neurotransmitter is released by the presynaptic cell into the synaptic cleft.
- For an EPSP, an excitatory neurotransmitter will be released, which will bind to receptors on the postsynaptic cell.
- The binding of these excitatory neurotransmitters will result in sodium channels opening, allowing sodium to go down its electrical and chemical gradient – depolarizing the cell.
- The EPSPs will be summated at the axon hillock and trigger an action potential.
- This action potential will cause the firing cell to release a neurotransmitter at its axon terminal, further conveying the electrical signal to other neurons.
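The sketch below makes the summation idea concrete. It is a deliberately simplified illustration, not a biophysical model: the resting potential, threshold, and per-synapse EPSP amplitude are illustrative values chosen for the example, and real dendrites integrate inputs with temporal decay, timing effects, and spine-specific geometry that are ignored here.

```python
# Minimal sketch of EPSP summation toward the action potential threshold.
# All numbers are illustrative placeholders, not measured values; real
# dendritic integration involves decay over time and distance, input timing,
# and spine geometry, none of which are modeled here.

RESTING_POTENTIAL_MV = -70.0   # illustrative resting membrane potential
THRESHOLD_MV = -55.0           # illustrative firing threshold
EPSP_AMPLITUDE_MV = 0.5        # illustrative depolarization per active spine synapse

def fires_action_potential(active_spine_synapses: int) -> bool:
    """Return True if the summed EPSPs depolarize the cell to threshold."""
    membrane_potential = RESTING_POTENTIAL_MV + active_spine_synapses * EPSP_AMPLITUDE_MV
    return membrane_potential >= THRESHOLD_MV

print(fires_action_potential(10))   # False: 10 EPSPs depolarize the cell by only 5 mV
print(fires_action_potential(30))   # True: 30 EPSPs depolarize the cell by 15 mV, reaching threshold
```

The point of this toy model is simply that more spines mean more simultaneous synapses, and therefore a better chance of reaching threshold.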
II. Creation
Dendrites initially are formed without spines. As development progresses, the plasma membrane of the dendrite forms protrusions called filopodia. These filopodia then form synapses with axons, and eventually transition from filopodia to dendritic spines [6].
The reason behind the creation of dendritic spines is currently unknown, but there are a few potential hypotheses. The first hypothesis suggests that the presence of dendritic spines increases the packing density of synapses, allowing more potential synapses to be formed. The second suggests that spines help prevent excitotoxicity, the overexcitation of the excitatory receptors (NMDA and AMPA receptors) present on the dendrites. These receptors usually bind glutamate, a typically excitatory neurotransmitter released from the presynaptic cell; overexcitation can result in damage to the neuron or, if more severe, neuronal death. Since dendritic spines compartmentalize charge [3], they help prevent the dendrite from being over-excited beyond the threshold potential for an action potential. Lastly, another hypothesis holds that the large variation in dendritic spine morphology indicates that these different shapes play a role in modulating how postsynaptic potentials are processed by the dendrite based on the function of the signal.
The creation of these dendritic spines is rapid during early development and slowly tapers off as the individual gets older, when the process is largely replaced by the pruning of synapses formed with dendritic spines. Pruning helps improve the signal-to-noise ratio of signals sent within neuronal circuits [3]. The signal-to-noise ratio describes the relationship between signals sent by neurons and signals actually received by postsynaptic cells, and it determines the efficiency of signal transmission. Experiments have shown that the presence of glutamate and excitatory receptors (such as NMDA and AMPA) can result in the formation of dendritic spines within seconds [3]. The introduction of NMDA and AMPA results in cleavage of intercellular adhesion molecule-5 (ICAM5) from hippocampal neurons; ICAM5 is a “neuronal adhesion molecule that regulates dendritic elongation and spine maturation” [11]. Furthermore, through a combination of fluorescent dye and confocal or two-photon laser scanning microscopy, scientists have been able to observe spines undergoing minor changes within seconds and more drastic conformational changes, even disappearing, over minutes to hours [12].
III. Morphology
The spine head’s morphology, a large bulbous head connected to a very thin neck that attaches to the dendrite, assists in its role as a postsynaptic cell. This shape allows one synapse at a dendritic spine to be activated and strengthened without influencing neighboring synapses [12].
Dendritic spine shape is extremely dynamic, allowing a single spine to alter its morphology throughout its lifetime [5]. However, dendritic spine morphology seems to take on a predominant form determined by the brain region in which the spine is located. For instance, spines at synapses receiving input from the thalamus take on the mushroom shape, whereas neurons in the lateral nucleus of the amygdala have thin spines on their dendrites [2]. The type of neuron and the brain region the spine originates from appear to be correlated with the observed morphology.
The spine contains a postsynaptic density, which consists of neurotransmitter receptors, ion channels, scaffolding proteins, and signaling molecules [12]. In addition, the spine contains smooth endoplasmic reticulum, which forms stacks called the spine apparatus. It also has polyribosomes, hypothesized to be the site of local protein synthesis in spines, and an actin-based cytoskeleton for structure [12]. The actin-based cytoskeleton makes up for the lack of microtubules and intermediate filaments, which play a crucial role in the structure and transport of most animal cells. Furthermore, these spines are capable of compartmentalizing calcium, the ion whose influx at synapses signals the presynaptic cell to release its neurotransmitter into the synaptic cleft [12]. Calcium plays a crucial role in second messenger cascades, influencing neural plasticity [6]. It also plays a role in actin polymerization, which allows for the motile nature of spine morphology [6].
Dendritic spines take on a variety of shapes. The common types are ‘stubby’ (short and thick spines with no neck), ‘thin’ (small head and thin neck), ‘mushroom’ (large head with a constricted neck), and ‘branched’ (two heads branching from the same neck) [12].
IV. Learning and Memory
Dendritic spines play a crucial role in memory and learning through long-term potentiation (LTP), which is thought to be the cellular basis of learning and memory. LTP is thought to induce spine formation, consistent with the common observation that learning is associated with the formation of dendritic spines. Furthermore, LTP is thought to be capable of altering both the immature and the mature hippocampus, a region commonly associated with memory [2]. In contrast, long-term depression (LTD) works in the opposite direction, decreasing dendritic spine density and size [2].
The relationship between dendritic spines and learning is still poorly understood. There seems to be a general trend suggesting that the creation of these spines is associated with learning, but it is unclear whether learning results in the formation of these spines or the formation of these spines results in learning. The general idea behind this hypothesis is that dendritic spines aid in the formation of synapses, allowing the brain to form more connections. As a result, a decline in these dendritic spines in neuropsychiatric disorders, such as schizophrenia, could inhibit an individual’s ability to learn, which is consistent with the cognitive and linguistic deficits observed in individuals with schizophrenia.
Memory is associated with the strengthening and weakening of connections due to LTP and LTD, respectively. The alteration of these spines through LTP and LTD is called activity-dependent plasticity [6]. The main morphological shapes associated with memory are the mushroom spine, with its large head and constricted neck, and the stubby spine, which is short and thick with no neck [6]. Both of these spines are relatively large, resulting in more stable and enduring connections; these bigger, heavier spines are a result of LTP. By contrast, transient spines (which live four days or fewer) are usually smaller and more immature in morphology and function, resulting in more temporary and less stable connections.
LTP and LTD thus play a crucial role in modifying dendritic spine morphology, and neuropsychiatric disorders can disrupt these mechanisms, resulting in abnormal spine density and size.
Schizophrenia
I. What is Schizophrenia?
Schizophrenia is a mental disorder characterized by disordered thinking and behavior, hallucinations, and delusions [9]. The exact mechanisms of schizophrenia are still being studied as researchers work to determine the underlying biology of the disorder and ways to help affected individuals. Current treatment focuses on reducing, and in some cases eliminating, symptoms, but more research and understanding are required before the disorder itself can be fully treated.
II. Causation
The exact cause of schizophrenia appears to involve a combination of genetic and environmental factors. There is a correlation between traumatic or stressful life events during adolescence and an increased susceptibility to developing schizophrenia [1]. While research is still underway, some studies point to cannabis use increasing susceptibility to schizophrenia or worsening symptoms in individuals who already have the disorder [1]. There also appears to be a genetic component, given the increased likelihood of developing schizophrenia when a family member has it; this susceptibility seems to result from a combination of genes, though no single causative gene has been identified. Finally, there appears to be a chemical component, given differences in brain chemistry between neurotypical individuals and individuals with schizophrenia. In particular, researchers have observed elevated levels of dopamine in individuals with schizophrenia [1].
III. Relationship between Dendritic Spines and Schizophrenia
A common thread among most schizophrenia patients is impaired dendritic morphology of pyramidal neurons (the predominant cell type of the cerebral cortex), occurring across various cortical regions [7]. Postmortem brain tissue studies show a reduced density of dendritic spines in the brains of individuals with schizophrenia. These findings are consistent across the brain regions studied, including the frontal and temporal neocortex, the primary visual cortex, and the subiculum within the hippocampal formation [7]. Across seven studies reporting this finding, the median decrease in spine density was 23%, with individual studies reporting declines ranging from 6.5% to 66% [7].
Studies have also examined whether this decline in spine density could be attributed to antipsychotic drug use; however, both animal and human studies found no significant effect of these drugs on dendritic spine density.
The reduced spine density is hypothesized to result either from a failure of the brain to produce sufficient dendritic spines early in development or from a more rapid loss of spines during adolescence, when the onset of schizophrenia is typically observed [7]. The source of this decline is unclear but is thought to reflect deficits in pruning, maintenance, or the mechanisms underlying spine formation itself [7].
In vivo evidence bears on these hypotheses. Thompson et al. conducted an MRI study of twelve individuals with schizophrenia and twelve neurotypical individuals and found a progressive decline in gray matter, beginning in the parietal lobe and spreading to motor, temporal, and prefrontal areas [10]. The authors attribute this loss largely to a decline in dendritic spine density as the disorder progresses, which is consistent with the hypothesis that spines are lost during adolescence.
It is also possible that both factors occur in combination. Because most studies have only been able to examine postmortem brain tissue, it is difficult to determine whether spines decline over time or are simply never produced in the first place. The scarcity of in vivo studies makes it difficult to establish a concrete trend in the data.
Conclusion
While research is still ongoing, current evidence suggests that dendritic spines are crucial for learning and memory. Their role in these functions is reflected in their reduced density in various neuropsychiatric disorders, including schizophrenia, certain learning deficits present in some individuals with ASD, and the memory deficits of Alzheimer’s disease. These deficits appear to arise early, as neural networks are first being formed at synapses. Further research into the development of these spines and the emergence of their different morphological forms will be crucial for determining how to treat, and potentially cure, the deficits present in neuropsychiatric and learning disorders.
References
- “Causes – Schizophrenia.” NHS Choices, NHS, www.nhs.uk/conditions/schizophrenia/causes/.
- Bourne, Jennifer N, and Kristen M Harris. “Balancing Structure and Function at Hippocampal Dendritic Spines.” Annual Review of Neuroscience, U.S. National Library of Medicine, 2008, www.ncbi.nlm.nih.gov/pmc/articles/PMC2561948/.
- “Dendritic Spines: Spectrum: Autism Research News.” Spectrum, www.spectrumnews.org/wiki/dendritic-spines/.
- Hofer, Sonja B., and Tobias Bonhoeffer. “Dendritic Spines: The Stuff That Memories Are Made Of?” Current Biology, vol. 20, no. 4, 2010, doi:10.1016/j.cub.2009.12.040.
- Holtmaat, Anthony J.G.D., et al. “Transient and Persistent Dendritic Spines in the Neocortex In Vivo.” Neuron, Cell Press, 19 Jan. 2005, www.sciencedirect.com/science/article/pii/S0896627305000048.
- McCann, Ruth F, and David A Ross. “A Fragile Balance: Dendritic Spines, Learning, and Memory.” Biological Psychiatry, U.S. National Library of Medicine, 15 July 2017, www.ncbi.nlm.nih.gov/pmc/articles/PMC5712843/.
- Moyer, Caitlin E, et al. “Dendritic Spine Alterations in Schizophrenia.” Neuroscience Letters, U.S. National Library of Medicine, 5 Aug. 2015, www.ncbi.nlm.nih.gov/pmc/articles/PMC4454616/.
- Penzes, Peter, et al. “Dendritic Spine Pathology in Neuropsychiatric Disorders.” Nature Neuroscience, U.S. National Library of Medicine, Mar. 2011, www.ncbi.nlm.nih.gov/pmc/articles/PMC3530413/.
- “Schizophrenia.” Mayo Clinic, Mayo Foundation for Medical Education and Research, 7 Jan. 2020, www.mayoclinic.org/diseases-conditions/schizophrenia/symptoms-causes/syc-20354443.
- “Schizophrenia and Dendritic Spines.” Ness Labs, 20 June 2019, nesslabs.com/schizophrenia-dendritic-spines.
- “Synaptic Cleft: Anatomy, Structure, Diseases & Functions.” The Human Memory, 17 Oct. 2019, human-memory.net/synaptic-cleft/.
- Tian, Li, et al. “Activation of NMDA Receptors Promotes Dendritic Spine Development through MMP-Mediated ICAM-5 Cleavage.” The Journal of Cell Biology, Rockefeller University Press, 13 Aug. 2007, www.ncbi.nlm.nih.gov/pmc/articles/PMC2064474/.
- Zito, Karen, and Venkatesh N. Murthy. “Dendritic Spines.” Current Biology, vol. 12, no. 1, 2002, doi:10.1016/s0960-9822(01)00636-4.
Einstein’s Fifth Symphony
By Jessie Lau, Biochemistry and Molecular Biology ‘20
Author’s Note: Growing up, playing the piano was a major part of my life: weekdays were filled with hour-long practices while Saturdays were for lessons. My schedule was filled with preparations for board exams and recitals, and in the absence of the black and white keys, my fingers were always tapping away at any surface I could find. My parents always told me that learning the piano was good for my education and would put me ahead in school because it would help with my math and critical thinking in the long run. However, I was never able to understand the connection between the ease of reading music and my ability to calculate complex integrals. In this paper, I will expand on the benefits of learning an instrument for cognitive development.
Introduction
What do Albert Einstein, Werner Heisenberg, Max Planck, and Barbara McClintock all have in common? Beyond their Nobel Prize-winning research in their respective fields, all of these scientists shared a love of playing a musical instrument. At an early age, Einstein followed his mother in taking up the violin; Heisenberg learned to read music and play the piano at the age of four; Planck became gifted at the organ and piano; McClintock played the tenor banjo in a jazz band during her time at Cornell University [1]. While these researchers honed their musical talents, they were engaging both their central and peripheral nervous systems. Playing an instrument requires the coordination of various parts of the brain working together. The motor system gauges the meticulous movements needed to produce sound, which is then picked up by the auditory circuitry. Simultaneously, sensory information from the fingers and hands is delivered to the brain. Individuals reading music also rely on visual pathways to send information to the brain, which is processed and interpreted to generate a response carried out by the extremities. All the while, the sound of music elicits an emotional response from the player.
Feedforward and feedback pathways are two kinds of auditory-motor interaction engaged while playing an instrument. Feedforward interactions are predictive processes that can influence motor responses, for example, tapping to the rhythm of a beat in anticipation of upcoming changes and accents in the piece. Feedback interactions, by contrast, are particularly important for stringed instruments such as the violin, where pitch varies continuously and requires constant adjustment [12]. As shown in Figure 1, the musician must auditorily perceive each note and respond with suitably timed motor changes. All of these neurophysiological components raise the question of how musical training might shape brain development. Longitudinal studies find that musical training can broadly benefit the development of linguistic skills, executive function, general IQ, and academic achievement [2].
Linguistic Skills
Music shares the same dorsal auditory pathway and processing centers in the brain as all other sounds. This passageway is anatomically linked by the arcuate fasciculus, suggesting that instrumental training may carry over into language-related skills. The pathway is central to an array of such skills, including language development, second-language acquisition, and verbal memory [2]. According to Vaquero et al., “Playing an instrument or speaking multiple languages involve mapping sounds to motor commands, recruiting auditory and motor regions such as the superior temporal gyrus, inferior parietal, inferior frontal and premotor areas, that are organized in the auditory dorsal stream” [10].
Researchers studying the effects of acoustic sounds mimicking stop-consonant speech on language development find that children who learn instruments during the critical developmental period (0-6 years old) build lasting structural and organizational modifications in their auditory systems that later affect language skills. Stop consonants include the voiceless sounds /p/, /t/, and /k/, as well as the voiced sounds /b/, /d/, and /g/. Dr. Strait and her colleagues describe their observations: “Given relationships between subcortical speech-sound distinctions and critical language and reading skills, music training may offer an efficient means of improving auditory processing in young children” [11].
Similarly, Dr. Patel suggests that speech and music share overlapping brain networks, in part because of the accuracy required to play an instrument. Refining this finesse demands attentional training combined with self-motivation and determination. Repeated stimulation of these brain networks garners “emotional reinforcement potential,” which is key to “… good performance of musicians in speech processing” [3].
Beyond stimulating auditory neurons, instrumental training has been shown to improve verbal memory. For example, in a comparative analysis, researchers found that children who had undergone musical training demonstrated verbal memory advantages over their peers without training [4]. Following up a year after the initial study, they found that continued practice led to substantial further gains in verbal memory, while those who discontinued showed no improvement. This finding is supported by Jakobson et al., who propose that “… enhanced verbal memory performance in musicians is a byproduct of the effect of music instruction on the development of auditory temporal-order processing abilities” [5].
In the case of acquiring a second language (abbreviated L2), a study of 50 Japanese adults learning English finds that “… the ability to analyze musical sound structure would also likely facilitate the analysis of a novel phonological structure of an L2” [6]. The researchers further suggest that English syntax might be improved with musical exercises that target syntactic processes, such as “… hierarchical relationships between harmonic or melodic musical elements” [6]. Multiple studies have also found that music training engages brain structures that are likewise employed during language processing, including Heschl’s gyrus and Broca’s and Wernicke’s areas [2].
While musical and linguistic elements are stored in different regions of the brain, the shared auditory pathway allows instrumental training to strengthen linguistic development in multiple areas.
Executive Function
Executive function is the use of the prefrontal cortex to carry out tasks that require conscious effort toward a goal, particularly in novel scenarios [7]. This umbrella term includes cognitive control of attention and inhibition, working memory, and the ability to switch between tasks. Psychologists Dr. Hannon and Dr. Trainor find that formal musical education produces “… domain-specific effects on the neural encoding of musical structure, enhancing musical performance, music reading and explicit knowledge of the musical structure” [8]. The combination of domain-general development and executive functioning can influence linguistic as well as mathematical development. While learning an instrument, musicians must actively read musical notation, itself a unique language, and translate what they see into precise mechanical maneuvers, all while attending to and correcting errors in harmony, tone, beat, and fingering. Furthermore, becoming well trained requires scheduled rehearsals that build a foundational framework for operating the instrument while learning new technical elements and developing robust spatial awareness. This explicit exercise of executive function during scheduled practice sessions is thus essential in sculpting this region of the prefrontal cortex.
General IQ and Academic Performance
While listening to music has also been found to confer academic advantages, the deliberate, ordered practice of playing music grants musically trained individuals scholastic benefits absent in their counterparts. In a study conducted by Schellenberg, 144 six-year-olds were assigned to one of four groups: two music groups received keyboard or voice lessons, while two control groups received drama classes or no lessons at all [9]. After 36 weeks, testing with the Wechsler Intelligence Scale for Children, Third Edition (WISC-III), a battery of assessments that evaluates intelligence, showed a significant increase in IQ in all four groups; this general rise can be attributed in part to the start of grade school. Nevertheless, the children who received keyboard or voice lessons showed a greater jump in IQ. Schellenberg suggests that the elevated IQ scores of the musically trained six-year-olds mirror the effect of school attendance: participation in school increases IQ, and smaller learning settings lead to greater academic success. Music lessons, which are often taught individually or in small groups, replicate that structure and may thus produce similar IQ boosts [9].
Possible Confounding Factors
While these studies have found positive correlations between learning a musical instrument and various aspects of brain maturation, the researchers note the importance of accounting for confounding factors that often cannot be controlled, including socioeconomic status, prior IQ, education, and participants’ other activities. Although many of the researchers worked to recruit subjects that were similar in these domains, all of these external elements can play an essential role during development. Moreover, formally learning an instrument is often an expensive extracurricular activity, so more affluent families with higher educational backgrounds are more likely to be able to afford these programs for their children.
Furthermore, the studies varied in practice times, training durations, and the instruments participants were required to learn, so their results may not be reproducible under different parameters.
Beyond these external factors, one must also consider each participant’s willingness to learn the instrument. If a person lacks the desire or motivation to become musically educated, spending the required time playing the instrument does not necessarily translate into developmental gains, as the relevant regions of the brain are not actively engaged.
Conclusion
Numerous studies have demonstrated the varied benefits that music training can provide for cognitive development. The existing research suggests that physically, mentally, and emotionally engaging oneself in learning an instrument can confer diverse advantages on the maturing brain. The discipline and rigor needed to gain expertise on a musical instrument carry over, often unconsciously, into academic and scientific work, and the melodies of each piece spark creativity and imaginative thinking. While instrumental education does not fully account for Einstein’s, Heisenberg’s, Planck’s, and McClintock’s scientific success, this extracurricular activity has been shown to provide a substantial boost in critical thinking.
References
- “The Symphony of Science.” The Nobel Prize, March 2019. https://www.symphonyofscience.com/vids.
- Miendlarzewska, Ewa A., and Wiebke J. Trost. “How Musical Training Affects Cognitive Development: Rhythm, Reward and Other Modulating Variables.” Frontiers in Neuroscience 7 (January 20, 2014). https://doi.org/10.3389/fnins.2013.00279.
- Patel, Aniruddh D. “Why Would Musical Training Benefit the Neural Encoding of Speech? The OPERA Hypothesis.” Frontiers in Psychology 2 (June 29, 2011): 1–14. https://doi.org/10.3389/fpsyg.2011.00142.
- Ho, Yim-Chi, Mei-Chun Cheung, and Agnes S. Chan. “Music Training Improves Verbal but Not Visual Memory: Cross-Sectional and Longitudinal Explorations in Children.” Neuropsychology 17, no. 3 (August 2003): 439–50. https://doi.org/10.1037/0894-4105.17.3.439.
- Jakobson, Lorna S., Lola L. Cuddy, and Andrea R. Kilgour. “Time Tagging: A Key to Musicians’ Superior Memory.” Music Perception 20, no. 3 (2003): 307–13. https://doi.org/10.1525/mp.2003.20.3.307.
- Slevc, L. Robert, and Akira Miyake. “Individual Differences in Second-Language Proficiency.” Psychological Science 17, no. 8 (2006): 675–81. https://doi.org/10.1111/j.1467-9280.2006.01765.x.
- Banich, Marie T. “Executive Function.” Current Directions in Psychological Science 18, no. 2 (April 1, 2009): 89–94. https://doi.org/10.1111/j.1467-8721.2009.01615.x.
- Hannon, Erin E., and Laurel J. Trainor. “Music Acquisition: Effects of Enculturation and Formal Training on Development.” Trends in Cognitive Sciences 11, no. 11 (November 2007): 466–72. https://doi.org/10.1016/j.tics.2007.08.008.
- Schellenberg, E. Glenn. “Music Lessons Enhance IQ.” Psychological Science 15, no. 8 (August 1, 2004): 511–14. https://doi.org/10.1111/j.0956-7976.2004.00711.x.
- Vaquero, Lucía, Paul-Noel Rousseau, Diana Vozian, Denise Klein, and Virginia Penhune. “What You Learn & When You Learn It: Impact of Early Bilingual & Music Experience on the Structural Characteristics of Auditory-Motor Pathways.” NeuroImage 213 (2020): 116689. https://doi.org/10.1016/j.neuroimage.2020.116689.
- Strait, D. L., S. O’Connell, A. Parbery-Clark, and N. Kraus. “Musicians’ Enhanced Neural Differentiation of Speech Sounds Arises Early in Life: Developmental Evidence from Ages 3 to 30.” Cerebral Cortex 24, no. 9 (2013): 2512–21. https://doi.org/10.1093/cercor/bht103.
- Zatorre, Robert J., Joyce L. Chen, and Virginia B. Penhune. “When the Brain Plays Music: Auditory–Motor Interactions in Music Perception and Production.” Nature Reviews Neuroscience 8, no. 7 (July 2007): 547–58. https://doi.org/10.1038/nrn2152.