Category Archives: Biology
The Similarity Between Human and Dog Microbiomes
By Mangurleen Kaur, Biological Science, ’23
Author’s Note: In one of my introductory biology classes, I learned about microbes. That class discussed relationships among microbes and between microbes and human beings. One point that stuck in my mind was the microbial relationship between humans and one of our favorite pets, the dog. While researching this topic, I found it so astounding that I decided to write about it. I hope this piece will be interesting not only to science lovers but also to the general public.
Both inside and out, our bodies harbor a huge array of microorganisms. These microorganisms are a diverse group of generally minute life forms, which are called microbiota when they are found within a specific environment. Microbiota can refer to all the microorganisms found in an environment, including bacteria, viruses, archaea, protozoa, and fungi. The collection of genomes from all the microorganisms found in a particular environment, in turn, is referred to as a microbiome. According to the Human Microbiome Project (HMP), this plethora of microbes contributes more genes responsible for human survival than humans themselves do; researchers estimate that the human microbiome contains 360 times more bacterial genes than human genes. These results show that the microbial contribution is critical for human survival. For instance, bacterial genes present in the gastrointestinal tract allow humans to digest and absorb nutrients that would otherwise be unavailable. Microbes also assist in the synthesis of many beneficial compounds, like vitamins and anti-inflammatory agents, that our genome cannot produce. (4)
Where does this mini-ecosystem begin? Microbes colonize our bodies as soon as we leave the mother's womb: we acquire them first from the mother's vagina and later through breastfeeding, which plays a great role in shaping each person's unique microbial community. Several factors influence the microbiome, including physiology, diet, lifestyle, age, and environment. Microbiomes are present not only in humans but also in most animals, where they play a significant role in health. For instance, gastrointestinal microorganisms exist in symbiotic associations with animals: microorganisms in the gut assist in the digestion of feedstuffs, help protect the animal from infections, and in some cases even synthesize and provide essential nutrients to their animal host. This gives us an idea of how important these microorganisms are to our living systems as a whole. (3)
Besides humans' strong emotional connection with dogs, there is also a biological dimension to human-dog interactions, and research has begun to explore it. Computational biologist Luis Pedro Coelho and his colleagues at the European Molecular Biology Laboratory, in collaboration with Nestlé Research, studied the gut microbiome (the genetic material belonging to the gut microbiota) of beagles and retrievers. They found that the gene content of the dog microbiome was more similar to the human gut microbiome than to the microbiomes of pigs or mice: when the researchers mapped the gene content of the dog, mouse, and pig microbiomes against human gut genes, they found overlaps of 63%, 20%, and 33%, respectively. (5) This shows the extensive similarity between human and dog gut microbiomes in comparison to other animals. Speaking on the discovery, Coelho says: “We found many similarities between the gene content of the human and dog gut microbiome. The results of this comparison suggest that we are more similar to man’s best friend than we originally thought.” (1)
The University of Colorado Boulder conducted a study on the types of microbes present on different parts of the human body to better understand their diversity and its significance for human health. The team sampled 159 people and 36 dogs across 60 American families, taking samples from the tongue, forehead, right and left palms, and feces to characterize individual microbial communities. The researchers learned that people who own dogs are much more likely to share the same kinds of these “good” bacteria with their dogs. They also learned that children who are raised with dogs are less likely than others to develop a range of immune-related disorders, including asthma and allergies. “One of the biggest surprises was that we could detect such a strong connection between their owners and pets,” said Rob Knight, a faculty member at CU-Boulder’s BioFrontiers Institute. (6) The results showed that cohabiting adults who own a dog share the greatest number of skin phylotypes, while adults who neither own a dog nor live together share the fewest.
The University of Arizona, together with other universities including UC San Diego, is conducting another study that is recruiting healthy people from Arizona, age 50 or older, who have not lived with dogs for at least the past six months and who would like to live with an assigned dog. The goal of the study is to see whether dogs enhance the health of older people by acting as probiotics (sources of good bacteria); the research is ongoing and its outcomes have not yet been released. Knight, Professor of Pediatrics and Computer Science & Engineering at UC San Diego, studies microbiomes with his lab. He and his team found that the microbial communities on adults' skin are, on average, more similar to those of their own dogs than to those of other dogs. They also found that cohabiting couples share more microbes with one another if they have a dog than couples who do not. Their research suggests that a dog's owner can be identified just by analyzing the microbial diversity of the dog and its human, since the two share microbiomes. These studies are uncovering relationships that are very helpful to microbiology and to health science more broadly. (2)
These studies reveal the interesting relationships between microbiomes, ourselves, and other living beings. So far, they have shown how a dog's microbiome is shared with its owner and how gene sequencing helps us understand these connections. This growing understanding raises further outstanding questions: What are the health benefits of a dog to a human? How can dogs help prevent certain chronic diseases? Answering them represents an exciting challenge for scientists and researchers as they refine their understanding of microbiomes.
Works Cited
- “NIH Human Microbiome Project defines normal bacterial makeup of the body”. National Institutes of Health, U.S. Department of Health and Human Services. www.nih.gov. Published on August 31, 2015. Accessed May 10, 2020.
- Ganguly, Prabarna. “Microbes in us and their role in human health and disease”. www.Genome.gov. Published on May 29, 2019. Accessed May 10, 2020.
- “Dog microbiomes closer to humans’ than expected”. Research in Germany, Federal Ministry of Education and Research. www.researchingermany.org. Published on April 20, 2018. Accessed May 11, 2020.
- Trevino, Julissa. “A Surprising Way Dogs Are Similar to Humans.” www.smithsonianmag.com. Published on April 23, 2018. Accessed February 11, 2020.
- Song, Se Jin, Christian Lauber, Elizabeth K Costello, Catherine A Lozupone, Gregory Humphrey, Donna Berg-Lyons, Gregory Caporaso, et al. “Cohabiting Family Members Share Microbiota with One Another and with Their Dogs.” eLife. eLife Sciences Publications, Ltd. elifesciences.org. Published on April 16, 2013. Accessed May 11, 2020.
- Sriskantharajah, Srimathy. “Ever feel in your gut that you and your dog have more in common than you realized?” www.biomedcentral.com. Published on April 11, 2018. Accessed February 11, 2020.
Use of Transgenic Fish and Morpholinos for Analysis of the Development of the Hematopoietic System
By Colleen Mcvay, Biotechnology, 2021
Author’s Note: I wrote this essay for my Molecular Genetics class to review methods of using zebrafish as a model for understanding the mechanisms underlying the development of blood (hematopoietic) stem cells. I would love for readers to better understand how the use of transgenic zebrafish and morpholinos has advanced our knowledge of the embryonic origin, genetic regulation, and migration of HSCs during early embryonic development.
Introduction
Hematopoietic stem cells (HSCs), the immature cells found in the peripheral blood and bone marrow, develop during embryogenesis and are responsible for the constant renewal of blood throughout an organism's life. Hematopoietic development in the vertebrate embryo arises in consecutive, overlapping waves, described as the primitive and definitive waves. These waves are distinguished by the types of specialized blood cells they generate, and each occurs in distinct anatomical locations (7). In order to visualize and manipulate these embryonic developmental processes, a genetically tractable model must be used. Although many transgenic animals provide adequate models for studying hematopoiesis and disease, the zebrafish (Danio rerio) proves far superior because its embryonic developmental processes are easily visualized and manipulated (6). Through diagrams and analysis, this discussion will expand upon the mechanisms of hematopoietic stem cell development and explain how this knowledge is enriched through the use of transgenic animals and morpholinos in models such as the zebrafish.
The Zebrafish Model
The zebrafish (Danio rerio) model has proven to be a powerful tool in the study of hematopoiesis and offers clear advantages over other vertebrate models, such as the mouse (Mus musculus). These advantages include developmental and molecular mechanisms conserved with higher vertebrates, the optical transparency of its embryos and larvae, the genetic and experimental convenience of the fish, its external fertilization allowing in vivo visualization of embryogenesis, and its sequential waves of hematopoiesis (9). Additionally, zebrafish allow clear visualization of the phenotypic changes that occur during the transition from the embryonic to adult stages, which is beneficial for understanding the sequential-wave mechanism of hematopoiesis, as explained below (8). In mouse models, by contrast, loss of many hematopoietic transcription factors is embryonic lethal, meaning the embryos die before these processes can be observed, which prevents the same visualization (12).
An Overview of Hematopoietic Development
The development of blood in all vertebrates involves two waves of hematopoiesis: the primitive wave and the definitive wave (4). Primitive hematopoiesis, involving erythroid progenitors (cells that give rise to megakaryocytes and erythrocytes), happens during early embryonic development and is responsible for producing the erythroid and myeloid cell populations (5). The primitive wave is transitory, and its main purpose is to produce red blood cells to assist tissue oxygenation. These erythroid progenitor cells first appear in blood islands in the extra-embryonic yolk sac; however, they are neither pluripotent nor capable of self-renewal (11). Later in development (at varying points for different species), definitive hematopoiesis produces hematopoietic stem and progenitor cells (HSPCs) that generate the multipotent blood lineages of the adult organism (7). The HSCs originate in the aorta-gonad-mesonephros (AGM) region of the developing embryo, from which they migrate to the fetal liver and bone marrow [Figure 1].
Figure 1: Stages of Embryonic Hematopoiesis
This figure shows the establishment of primitive and definitive hematopoietic stem cells (HSCs) during embryonic development. The first HSCs appear in the blood islands of the extraembryonic yolk sac. The primitive wave is transient, and the successive definitive wave starts intraembryonically in the aorta-gonad-mesonephros (AGM) region. The definitive HSCs are multipotent and migrate to the fetal liver, where they proliferate before seeding the bone marrow. Embryonic hematopoiesis thus proceeds through a systematic circulation between these sites.
Hematopoietic Development in the Zebrafish Model
Like all vertebrates, zebrafish have sequential waves of hematopoiesis. However, hematopoiesis in zebrafish occurs in a distinct manner compared to other vertebrate models, with its primitive HSCs generated intra-embryonically in a ventral mesoderm tissue called the intermediate cell mass (ICM) (2). Throughout this primitive wave, the anterior part of the embryo creates myeloid cells while the posterior creates mostly erythrocytes, both of which circulate throughout the embryo from 24 hours post-fertilization (10). Hematopoiesis next occurs in the aorta-gonad-mesonephros (AGM) region, followed by the emergence of HSCs from the ventral wall of the dorsal aorta. The HSCs then migrate to a posterior region in the tail called the caudal hematopoietic tissue (CHT). Finally, from 4 days post-fertilization, lymphopoiesis initiates in the thymus and HSCs move to the kidney marrow, which is functionally equivalent to bone marrow in mammals (10, 11) [Figure 2].

Although the anatomical sites of hematopoiesis differ between zebrafish and mammals, the molecular mechanisms and genetic regulation are highly conserved, permitting translation to mammals in several ways. First, because zebrafish embryos can survive without red blood cells for an extended period, obtaining oxygen by passive diffusion, they are ideal for identifying mutations that would be embryonic lethal in mice (9). These zebrafish mutants have revealed genes that are critical components of human blood diseases and allow toxicity and embryonic lethality to be recognized at an early stage of drug development. Additionally, the zebrafish model is amenable to forward genetic screens that are infeasible in any other vertebrate model simply due to cost and space requirements. Finally, zebrafish embryos are permeable to water-soluble chemicals, making them ideal for high-throughput screening of novel bioactive compounds.
Figure 2: Hematopoiesis Development in the Zebrafish Model
A.) The sequential sites of hematopoiesis in embryonic zebrafish development. Development occurs first in the intermediate cell mass (ICM), next in the aorta-gonad-mesonephros (AGM), and then in the caudal hematopoietic tissue (CHT). Later, hematopoietic cells appear in the thymus and kidney (modified from Orkin and Zon, 2008).
B.) Timeline for the developmental windows for hematopoietic sites in the zebrafish (Modified from Orkin and Zon, 2008).
Transgenic Zebrafish & Morpholinos to Understand Genetic HSC Regulation and Migration
Transgenic zebrafish are easily manipulated and visualized through microinjection, chemical screening, and mutagenesis, all of which aid in identifying hematopoietic gene mutations and in understanding gene regulation and migration in a vertebrate model. Analysis of these mutations (through RNA sequencing, ChIP, microarrays, and selective inhibition of a gene) has identified critical components of blood development, describing both the functions of these genes within hematopoiesis and the phenotypes associated with defective development (1). Morpholinos are antisense oligonucleotides that target sequences around the translational start site, allowing selective inhibition of a targeted gene and analysis of the regulatory sequences in the resulting morphant (Martin et al., 2011). Large-scale screening techniques (such as chemical suppressor screens) applied to these mutants have identified many small molecules capable of rescuing hematopoietic defects and halting disease, along with new regulatory pathways (9). Although zebrafish hematopoiesis has different origin sites and migratory patterns than mammalian hematopoiesis, the genetic regulation of HSC development and lineage specification is conserved, allowing insights into the pathophysiology of disease.
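To make the targeting logic concrete: in base-pairing terms, a translation-blocking morpholino is simply the antisense (reverse complement) of the mRNA sequence spanning the start codon. The sketch below illustrates this with a purely hypothetical 25-base target; it is not a sequence from any study cited here, and real morpholino design must also account for the oligo's modified backbone, off-target matches, and RNA secondary structure.

```python
# Minimal sketch of morpholino targeting logic (hypothetical sequence).
# Morpholinos are conventionally written as DNA-style sequences; the
# actual oligo has a morpholine backbone but pairs by the same rules.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the antisense (reverse complement) of a sequence, 5'->3'."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

# Hypothetical 25 bases spanning the 5' UTR and start codon (ATG)
target = "GCCACCATGGCTAGCAAGGAGGAAT"   # sense strand, 5'->3'
morpholino = reverse_complement(target)  # oligo blocking translation initiation
print(morpholino)  # ATTCCTCCTTGCTAGCCATGGTGGC
```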
Conclusion
The zebrafish is an invaluable vertebrate model for studies of hematopoiesis because of its amenability to genetic manipulation and its easily viewed embryonic developmental processes. This organism has become increasingly important in understanding the genetic and epigenetic mechanisms of blood cell development, and the information produced is vital for translation into regenerative medicine applications. Although more research is needed into the specifics of HSC differentiation and self-renewal, zebrafish allow newly identified mutations and translocations underlying human hematopoietic diseases and cancers to be visualized and analyzed in a way no other model organism can match. With this analysis, a more complete understanding of the molecular mechanisms of hematopoietic diseases can be built, aiding the development of new treatments.
References
- Boatman S, Barrett F, Satishchandran S, et al. Assaying hematopoiesis using zebrafish. Blood Cells Mol Dis 2013;51:271–276
- Detrich H. W et al. (1995). Intraembryonic hematopoietic cell migration during vertebrate development. Proc Natl Acad Sci USA 92: 10713- 10717.
- E. Dzierzak and N. Speck, “Of lineage and legacy: the development of mammalian hematopoietic stem cells,” Nature Immunology, vol. 9, no. 2, pp. 129–136, 2008.
- Galloway J. L., Zon L. I. (2003). Ontogeny of hematopoiesis: examining the emergence of hematopoietic cells in the vertebrate embryo. Curr. Top. Dev. Biol. 53, 139-158
- Kumar, Akhilesh et al. “Understanding the Journey of Human Hematopoietic Stem Cell Development.” Stem cells international vol. 2019 2141475. 6 May. 2019, doi:10.1155/2019/2141475
- Gore, Aniket V et al. “The zebrafish: A fintastic model for hematopoietic development and disease.” Wiley interdisciplinary reviews. Developmental biology vol. 7,3 (2018): e312. doi:10.1002/wdev.312
- Jagannathan-Bogdan, Madhumita, and Leonard I Zon. “Hematopoiesis.” Development (Cambridge, England) vol. 140,12 (2013): 2463-7. doi:10.1242/dev.083147
- de Jong, J.L.O., and Zon, L.I. (2005). Use of the Zebrafish System to Study Primitive and Definitive Hematopoiesis. Annu. Rev. Genet. 39, 481–501.
- Jing, L., and Zon, L.I. (2011). Zebrafish as a model for normal and malignant hematopoiesis. Dis. Model. Mech. 4, 433–438.
- Orkin, Stuart H, and Leonard I Zon. “Hematopoiesis: an evolving paradigm for stem cell biology.” Cell vol. 132,4 (2008): 631-44. doi:10.1016/j.cell.2008.01.025
- Paik E. J., Zon L. I. (2010). Hematopoietic development in the zebrafish. Int. J. Dev. Biol. 54, 1127-1137
- Sood, Raman, and Paul Liu. “Novel insights into the genetic controls of primitive and definitive hematopoiesis from zebrafish models.” Advances in hematology vol. 2012 (2012): 830703. doi:10.1155/2012/830703
The Wood Wide Web: Underground Fungi-Plant Communication Network
By Annie Chen, Environmental Science and Management ’19
Author’s note: When people think of ecosystems, trees and animals usually come to mind. However, we most often neglect an important part of the ecosystem: fungi. Without our noticing, fungi stealthily connect organisms underground, creating a communication network that helps organisms interact with one another.
Picture yourself walking your dog in a quiet, peaceful natural forest, where you imagine the two of you as the only organisms capable of interacting with one another. However, you are not alone: the plants can communicate, and those trees and grasses are always speaking to each other without your noticing. The conversation between the vascular plants in this forest started before any of us were old enough to remember, and it will likely continue as long as the forest remains untouched. These conversations between seemingly disconnected organisms have helped the forest survive and thrive to become what you see today. You might wonder: what do these plants talk about, and most importantly, how do they communicate if they cannot move freely and have no vocal cords? The secret lies underground, in an extensive network.
The underground network connects different immobile creatures to one another. Much like above-ground biological interactions, the underground ecosystem is diverse; it not only houses many animals but also consists of the roots of different plants, bacteria, and fungal mycelium. Plant roots interact with their immediate neighbors, but in order to communicate with plants farther away, they rely on the underground fungal network, or, in the words of Dr. Suzanne Simard, who popularized the idea, the “Wood Wide Web” (WWW).
What is the underground “Wood Wide Web”, and how is it built?
This communication network is not made up of invisible radio waves like our Wi-Fi; rather, it relies on a minuscule, dense fungal network to deliver various signals and information [6]. These fungi, using their branching, arm-like filaments, build a communication structure called the mycelium that connects individual plants, and even the whole ecosystem. The mycelium delivers nutrients, sugar, and water, and, in a more complex dynamic with the plants, delivers chemical signals. The fungi expand their mycelium through reproduction and individual growth, building connections within the network. For the fungi to expand their mycelium and link different individual plants into one network, creating such an extensive system must be evolutionarily advantageous for them. That is where plant roots and their cooperative interactions come into play.
This communication network builds upon the foundation of mutualistic relationships between plants and fungi called mycorrhizae. In this mutualism, plants provide sugars to the fungi in exchange for limiting nutrients such as phosphorus, nitrogen, and sometimes water (Figure 1). According to an article published by Fleming, around 80-90% of the earth's vascular plants have this mutualistic relationship, which allows plants and fungi to connect with one another through the plant roots. Without this mutually beneficial relationship, the fungi would have no reason to expand their network to connect to plant roots and “help” these plants deliver chemical signals.
Beyond nutrient and informational exchange, plants also benefit from fungal priming: the initial fungal infection that creates the exchange interface between plant root and fungal cells forces the plant immune system to increase its readiness. This increased immunity indirectly improves the plant's chances of withstanding major disruptions, such as a disease sweeping through the ecosystem [6]. This continuous plant-fungi network, through nutrient exchange and the strengthening of each species' survival, connects the whole ecosystem together.
Figure 1: A simplified visual of species interactions within the fungal network.
(Source: BBC)
That being said, the plant and fungal species that make up the WWW vary with the participants that built each ecosystem. The interaction also means that plants can selectively provide carbon, or release defense chemicals, to decide which fungi remain in mutualistic relationships with them [1]. Introducing a non-native species can alter an ecosystem by encouraging different types of mycorrhizae. One such example is the introduction of European cheatgrass in Utah, U.S.A. Before the introduction, the site's mycorrhizal makeup had shown no significant changes. After the cheatgrass arrived, however, the site's fungal genetic makeup shifted, even though the cheatgrass did not carry European fungi with it [8]. Each plant individual or species, by using its preferences and abilities to “choose” its mutualistic partners, can diversify the fungal network and make it more extensive and powerful, both to the benefit and to the harm of other species in the ecosystem. This interspecies perspective is important for understanding the WWW.
Plants talk and interact through the “Wood Wide Web”
The communication extends to others in the ecosystem: plants can “speak” to each other interspecifically, too. Individuals in an ecosystem are closely linked to one another, whether directly, indirectly through the fungal network, or both. Indirect communication relies on the fungal network, through which various chemical signals pass. For instance, an increased phosphorus level in the soil signals to other plants that a plant-fungal interaction is underway, and they may respond in different ways to turn the situation to their advantage: they could try to claim their share of nutrients by producing sugars to attract these fungi, or they could weaken their competitors by excreting chemicals that impair the fungus' ability to provide nutrients [13]. The WWW provides an internet of sorts that allows plants to select from a variety of methods for interacting with one another, near or far.
Plants can choose to actively help each other through this fungal network, allowing both individuals, or entire species, to thrive in the ecosystem. Evolutionarily speaking, plants benefit their own survival by supporting their own kind. When an individual plant is thriving and producing excess carbon, it can help other plants by transferring the surplus through the fungal network [6]. An older, dying tree can likewise transfer its resources to younger neighbors through the fungal network, or donate its stored nutrients to the entire ecosystem through decay, a process aided by fungal hyphae growing over the material [5]. Furthermore, through the WWW, plants can warn one another about threats such as herbivores and parasitic fungi. In the research of Song et al., tomato plants infected with pathogens sent defensive chemical signals, such as enzymes, into the existing fungal network, warning healthy neighbors of nearby danger before they were infected themselves. Using this mechanism, plants can concentrate defensive chemicals among neighbors to minimize the spread of a parasitic fungus through the area.
Plants can not only benefit one another but also use this network to put others at a disadvantage, for instance to suppress a competing or predating species that threatens their own survival. Allelopathy, the exuding of chemicals to ward off enemies, usually calls to mind plants discouraging herbivores from consuming them, such as the milky sap that causes skin rashes and inflammation when a cucumber vine is cut. But allelopathy is also active underground, through the WWW. Barto et al., in their research on allelopathy, showed that even within a disturbed habitat, when plant species compete, one species may use the regional fungal network attached to it to deliver allelochemicals to a neighboring species, preserving the fitness of its own kind.
Passive Animals, Active Plants and Fungi
We usually think of herbivores as active players shaping the ecosystem; in the WWW, they are the last to respond to changes. Plants and fungi signal each other when an herbivore is present in the network, well before it has established its presence among the neighboring plants. Fungi are an important and active part of this ecosystem because they, too, can help exclude herbivores through chemical allelopathy. While fungi could choose to colonize a different species that provides more benefits, they can instead concentrate their energy on defending their current host. Before the herbivore can expand its population, the plants have already communicated with one another through the excretion of allelopathic chemicals, both to ward off the herbivores causing the damage and to warn other plants of the herbivore's presence [1]. Fungal colonization of two nightshade species, Solanum ptycanthum and Solanum dulcamara, increased levels of defense proteins against feeding caterpillars. This is just one example of herbivory defense mechanisms that reduce predator fitness, specifically growth and feeding rates [11]. When caterpillars feed on these Solanum species, the active players in the relationship, the fungi and their plant hosts, deploy chemical defenses induced by the fungi to discourage feeding, and over evolutionary time they drive out predators that reduce fungal-plant fitness.
Alone without the Wood Wide Web: Human Impacts
The network is built on a web of hyphal connections that is barely visible to the human eye, and all the more vulnerable to change. Older ecosystems not only have a higher percentage of larger trees with broader root systems but are also denser, both of which lead to a more extensive mycorrhizal fungal network. Species diversity, on top of age and density, contributes to a complex and healthy WWW that supports all plants in the ecosystem [3]. A disturbed ecosystem, however, severs the connections in this network, making a previously extensive system difficult to repair.
Human activities that disturb the soil can damage this fragile yet powerful connection: seasonal tilling in agriculture, intensive logging, and changes to soil chemistry and structure from laying concrete all inhibit the soil from building an extensive web. Physically turning and chemically altering the soil are direct human impacts that cut off hyphal connections between plant individuals in the system. According to Dr. Simard's statement in a Biohabitats interview, urban plants are less healthy because they lack the WWW to help them thrive through nutrient, water, and chemical-signal exchange; they must manage all of those needs alone or rely on humans to provide them. Indirectly, the large-scale death and removal of plant individuals through logging also destroys the healthy mutualistic relationship between plants and fungi. When individual plants are prevented from connecting with their mutualistic partners, whether through soil disturbance or the death of those partners, the WWW cannot extend, and the resulting isolation leaves urban tree populations vulnerable to disease unless humans diligently maintain them.
It is true that smaller versions of the WWW still develop between periods of disturbance, as shown indirectly by fungal colonization in off-site lab experiments such as those in the studies of Barto et al. and Hawkes et al. However, these smaller networks lack important interspecies collaborations; a minimally disturbed habitat functions much better at resisting climate change and the foreign and invasive species that threaten the health of the ecosystem.
Fortunately, despite the growing demand for land driven by economic and population growth, there is increased awareness of the importance of plants and ecosystem health. Over the last two decades, new policies and practices indicate that major western conservation agencies have started to take on an interspecies perspective. One notable example is the inclusion of ecosystem management under the Clean Water Act, which reflects the notion that endangered flora and fauna depend on the health of a whole ecosystem [14]. The growing understanding of how interconnected plant species are, together with conservation methods that existed before western colonization, has changed how governments aim to preserve nature.
Regardless of the level of human impact, the WWW carries important communication between plants, fungi, and herbivores through chemical signals and nutrient exchange, used both to sustain and to outcompete one another. The connectivity that relays information within this network is key to a healthy plant community and, by extension, to the health of the ecosystem. The next time you walk your dog in the woods, remember that the plants around you are communicating through this underground network. To keep such forests healthy for generations to come, it is up to us to rethink development strategies and preserve the network that helps these species thrive and keep talking across the WWW.
References
- Biere, Arjen, Hamida B. Marak, and Jos MM van Damme. “Plant chemical defense against herbivores and pathogens: generalized defense or trade-offs?.” Oecologia 140.3 (2004): 430-441.
- Barto, E. Kathryn, et al. “The fungal fast lane: common mycorrhizal networks extend bioactive zones of allelochemicals in soils.” PLoS One 6.11 (2011): e27195.
- Beiler, Kevin J., et al. “Architecture of the wood‐wide web: Rhizopogon spp. genes link multiple Douglas‐fir cohorts.” New Phytologist 185.2 (2010): 543-553.
- Belnap, Jayne, and Susan L. Phillips. “Soil biota in an ungrazed grassland: response to annual grass (Bromus tectorum) invasion.” Ecological applications 11.5 (2001): 1261-1275.
- Biohabitats. “Expert Q&A: Suzanne Simard.” Biohabitats Newsletter 14.4 (2016).
- Fleming, Nic. “Plants talk to each other through a network of fungus.” BBC Earth. (2014).
- Gehring, Catherine, and Alison Bennett. “Mycorrhizal fungal–plant–insect interactions: the importance of a community approach.” Environmental entomology 38.1 (2009): 93-102.
- Hawkes, Christine V., et al. “Arbuscular mycorrhizal assemblages in native plant roots change in the presence of invasive exotic grasses.” Plant and Soil 281.1-2 (2006): 369-380.
- Hawkes, Christine V., et al. “Plant invasion alters nitrogen cycling by modifying the soil nitrifying community.” Ecology letters 8.9 (2005): 976-985.
- Macfarlane, Robert. “The Secrets of the Wood Wide Web.” The New York Times. (2016).
- Minton, Michelle M., Nicholas A. Barber, and Lindsey L. Gordon. “Effects of arbuscular mycorrhizal fungi on herbivory defense in two Solanum (Solanaceae) species.” Plant Ecology and Evolution 149.2 (2016): 157-164.
- Song, Yuan Yuan, et al. “Interplant communication of tomato plants through underground common mycorrhizal networks.” PloS one 5.10 (2010): e13324.
- Van der Putten, Wim H. “Impacts of soil microbial communities on exotic plant invasions.” Trends in Ecology & Evolution 25.9 (2010): 512-519.
- Doremus, H., Tarlock, A. Dan. “Can the Clean Water Act Succeed as an Ecosystem Protection Law?” George Washington Journal of Energy and Environmental Law 4 (2013): 49.
Not All Heroes Wear Capes: How Algae Could Help Us Fight Climate Change
By Robert Polon, Biological Sciences Major, ’21
Author’s Note: In my UWP 102B class, we were assigned the task of constructing a literature review on any biology-related topic of our choice. A year ago, in my EVE 101 class, my professor briefly mentioned the idea that algae could be used to sequester atmospheric carbon dioxide in an attempt to slow the rate of climate change. I found this theory very interesting, and it resonated with me longer than most of the other subject matter in that class. I really enjoyed doing the research for this paper, and I hope it gives people some hope for the future. I’d like to thank my UWP professor, Kathie Gossett, for pointing me in the right direction throughout the process of writing this paper.
Abstract
With climate change growing ever more relevant in our daily lives, scientists are working hard to find solutions to slow and reverse the damage that humans are doing to the planet. Algae-based carbon sequestration methods are a viable solution to this problem. Photosynthesis allows algae to remove carbon dioxide from the atmosphere and turn it into biomass and oxygen. It has even been proposed that raw algal biomass can be harvested and used as a biofuel, providing a greener alternative to fossil fuels. Though technology is not yet developed enough to make this change in our primary fuel source, incremental progress can be made to slowly integrate algal biofuel into daily life, and further research and innovation could make full-scale replacement of fossil fuels with algal biofuel a feasible option. Methods of algal cultivation include open-ocean algal blooms, photobioreactors, algal turf scrubbing, and BICCAPS (bicarbonate-based integrated carbon capture and algae production system). There are pros and cons to each method, but open-ocean algal blooms tend to be the most popular because they are the most economical and produce the most algae, even though they are the most harmful to the environment.
Keywords
Algae | Biofuel | Climate Change | Carbon Sequestration
Introduction
As we get further into the 21st century, climate change becomes less of a theory and more of a reality. Astronomically high post-Industrial Revolution rates of greenhouse gas emissions have started to catch up with humans, as the initial consequences of these actions are now coming to light with fear that worse is on the way. Many solutions have been proposed to decrease greenhouse gas emissions, but very few involve fixing the damage that has already been done. It has been proposed that growing algae in large quantities could help solve this climate crisis.
According to the Environmental Protection Agency, 76% of greenhouse gas emissions come in the form of carbon dioxide. As algae grow, they remove carbon dioxide from the atmosphere by converting it to biomass and oxygen via photosynthesis, and they do so at relatively fast rates. On average, one kilogram of algae utilizes 1.87 kilograms of CO2 daily, which means that one acre of algae utilizes approximately 2.7 tons of CO2 per day [1]. For comparison, one acre of a 25-year-old maple-beech-birch forest utilizes only 2.18 kilograms of CO2 per day [2], an amount of carbon dioxide sequestration that can be matched by just 1.17 kilograms of algae. After its photosynthetic purpose has come to an end, the raw algal biomass can be harvested and used as an environmentally friendly biofuel. This literature review will serve as a comprehensive overview of the literature on this proposal to use algae as a primary combatant against global warming.
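As a quick sanity check, the figures above are mutually consistent. A minimal back-of-the-envelope calculation (assuming the "2.7 tons" figure refers to metric tons, which the cited source does not state explicitly):

```python
# Back-of-the-envelope check of the sequestration figures cited above.
co2_per_kg_algae = 1.87    # kg CO2 fixed per kg of algae per day [1]
acre_rate_tons = 2.7       # tons CO2 per acre of algae per day [1] (assumed metric)
forest_acre_rate = 2.18    # kg CO2 per acre of 25-year-old forest per day [2]

# Standing algal biomass per acre implied by the two rates in [1]:
biomass_per_acre = acre_rate_tons * 1000 / co2_per_kg_algae
print(f"Implied algae per acre: {biomass_per_acre:,.0f} kg")        # ~1,444 kg

# Algae needed to match one forest acre's daily CO2 uptake:
algae_equivalent = forest_acre_rate / co2_per_kg_algae
print(f"Algae matching a forest acre: {algae_equivalent:.2f} kg")   # ~1.17 kg
```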
Carbon Dioxide
For centuries, heavy usage of fossil fuels has tarnished Earth’s atmosphere with the addition of greenhouse gases [3]. These gases trap heat by absorbing infrared radiation that would otherwise leave Earth’s atmosphere. This increases the overall temperature of the earth, which leads to the melting of polar ice caps, rising sea levels, and strengthening of tropical storm systems, among many other devastating environmental effects [4]. The most commonly emitted greenhouse gas, carbon dioxide, tends to be the primary focus of global warming treatments.
These algal treatment methods are no different. Any algal treatment option depends on the fact that algae sequester atmospheric carbon dioxide through photosynthesis, converting carbon dioxide into biomass and releasing oxygen into the atmosphere as a product of the photosynthetic process [5].
Algal Cultivation
There are four proposed methods of algal cultivation: open-ocean algal blooms, photobioreactors, algal turf scrubbing, and BICCAPS. These techniques all differ greatly, with various benefits and drawbacks to each.
Open-Ocean Algal Blooms
Algae are most abundant on the surface of the open ocean. With the addition of their limiting nutrient, iron, in the form of iron(II) sulfate (FeSO4), massive algal blooms can easily be sparked anywhere in the ocean [3]. This is the way most scientists envision sequestration because, of all the proposed cultivation techniques, it produces the most algae in the least time. Intuitively, this method also removes the most carbon dioxide from the atmosphere, as the amount of CO2 removed is directly proportional to the quantity of algae undergoing photosynthesis.
There are many benefits to open-ocean algal blooms. There is no shortage of space on the surface of the ocean, so, hypothetically, a seemingly limitless amount of algal mass can be cultivated this way. The technique is also very cost-efficient: all that is needed to employ it is some iron(II) sulfate, and nature will do the rest [3].
Once the algal bloom has grown to its maximum size, there is an overabundance of algal biomass on the ocean's surface. Some researchers have proposed that this mass be collected and used as a biofuel [5,6,7]. Others have proposed letting nature take its course and allowing the dead algae to sink, so that the carbon dioxide taken out of the atmosphere is stored safely at the bottom of the ocean [8]. There, the algal biomass is easily accessible for consumption by shellfish, which store the carbon in their calcium carbonate shells [3].
This solution is not an easy one to deploy, however, because algal blooms bring many problems to local ecosystems. Often referred to as harmful algal blooms (HABs), these rapidly growing algae clusters are devastating to the oceanic communities they touch. They increase acidity, lower temperature, and severely deplete oxygen levels in the waters where they grow [9]. Most lifeforms aren't prepared to handle environmental changes that push them out of their niches, so it's easy to see why HABs kill significant portions of marine life.
HABs can affect humans as well. Many species of algae are toxic to us, and ingestion of contaminated fish or water from areas affected by these blooms can lead to extreme sickness and even death. Examples of the resulting diseases are ciguatera fish poisoning and paralytic, neurotoxic, amnesic, and diarrheic shellfish poisoning [10]. The effects of harmful algal blooms have only been studied in the short term, but from what we have seen, they are a definite barrier to using this form of algae cultivation [11].
Photobioreactors
Photobioreactors are another frequently proposed tool for cultivating algae. These artificial growth chambers have controlled temperature, pH, and nutrient levels that produce optimal algal growth rates [12]. They can also run on wastewater that is not suitable for human consumption. Photobioreactors minimize evaporation, and with the addition of iron, magnesium, and vitamins, rates of carbon dioxide capture increase [1]. Due to the high concentration of algae in a relatively small space, photobioreactors have the highest rates of photosynthesis (and consequently carbon dioxide intake) of all the cultivation methods discussed in this paper.
This innovative technology was driven primarily by the need to come up with an alternative to triggering open-ocean algal blooms. Photobioreactors eliminate pollution and water contamination risks that are prevalent in harmful algal blooms. Furthermore, they make raw algal biomass easily accessible for collection and use as a biofuel, which open-ocean algal blooms do not [12].
The main drawback to this method is that the cost of building and maintaining photobioreactors is simply too high to be economically feasible right now [12]. Technological developments are needed to lower the short-term cost of operation and allow for mass production if we want to use them as a primary means of carbon sequestration. Their long-term economic feasibility remains unknown: most of the cost is incurred during construction of the photobioreactors, and while money is made back through the algae cultivated, the technology hasn't been around long enough for concrete long-term cost-benefit analyses free of speculation [14].
Algal Turf Scrubbing (ATS)
Proposed in 2018 by botanist Walter Adey, algal turf scrubbing (ATS) is a technique created to efficiently cultivate algae for use in the agriculture and biofuel industries. The process uses miniature wave generators to slightly disturb the flat surface of a floway, stimulating the growth of numerous algal species in the water. Biodiversity in these floways increases over time, and a typical ATS floway will eventually host over 100 different algal species [11].
Heavy metals and other toxic pollutants occasionally make their way into the floways; however, they are promptly removed to ensure that the product is as nontoxic as possible. The algal biomass is harvested biweekly and has a variety of uses. Less toxic harvests can be used as fertilizers in the agricultural industry, which research suggests is the most economically efficient use of the harvest. The biomass can also go toward biofuel, although the creators of the ATS system believe the majority of their product will go to agriculture, because they would not be able to produce enough algae to keep up with demand if our society moved toward using it as a biofuel [11].
The problems with ATS are not technological, but sociopolitical, as the research team behind it fears that they will not get the funding and resources needed to perform their cultivation at an effective level [11].
BICCAPS
The bicarbonate-based integrated carbon capture and algae production system (BICCAPS) was proposed to reduce the high costs of commercial algal biomass production by recycling the bicarbonate that is created when carbon dioxide is captured from the atmosphere, using it to culture alkalihalophilic microalgae (algae that thrive at a very basic pH, above 8.5). By enabling more algae to be cultured this way, the system should, in theory, cut the costs of both carbon capture and microalgal culture. It is also very sustainable, as it recycles nutrients and minimizes water usage, and the algae cultivated can be turned into biofuel to lower fossil fuel usage [13].
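To make the recycling loop concrete, the carbonate chemistry behind bicarbonate-based capture can be sketched as follows (a simplified, textbook-style scheme; the exact species and stoichiometry used in BICCAPS may differ):

Capture: CO2 + CO3^2- + H2O → 2 HCO3- (captured CO2 is absorbed into a carbonate solution)

Algal growth: 2 HCO3- → CH2O (biomass) + O2 + CO3^2- (algae consume bicarbonate, regenerating carbonate)

Net: CO2 + H2O → CH2O + O2, with the carbonate/bicarbonate pair recycled as the carbon carrier rather than consumed.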
The main drawback to this closed-loop system is that it does not cultivate as much algae as the other systems, though work is underway to improve this. The efficiency of BICCAPS has been shown to improve significantly with the addition of sodium to the water, which stimulates the growth of alkalihalophilic microalgae [13]. This means that, with modest improvements in efficiency, BICCAPS could become a primary algal biomass production strategy because of its low cost and sustainability.
Use of Algae as a Biofuel
While algae may not match the energetic efficiency of fossil fuels, they are not far behind. Algal biomass can be burned as a biofuel to power transportation, which would allow us to lower our use of fossil fuels and, consequently, our greenhouse gas emissions. Over its full cycle of growth and combustion, dry algal biomass releases more oxygen and less carbon dioxide than our current fuel sources, which not only lowers net CO2 emissions but also increases the overall atmospheric ratio of oxygen to carbon dioxide. More research is still needed to find the best possible blend of algal species for fuel use [12]. Using algae alone as a biofuel would not meet the world's energy demand, but photobioreactor technology continues to improve, giving hope that algae may one day be used more than fossil fuels [6].
A common counterargument to algal biofuel proposals is that burning dry algae provides only half the caloric value of a similarly sized piece of coal. While this is true, it should be taken into consideration that coal has an extraordinarily high caloric value and that the caloric value of algae is still high relative to the alternatives [3].
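To put that comparison in rough numbers, the sketch below uses illustrative heating values only; real figures vary widely with coal grade, algal species, and moisture content, and the algae value simply follows the "half of coal" claim above:

```python
# Rough heating-value comparison (illustrative numbers, not measured data).
heating_values_mj_per_kg = {
    "bituminous coal": 30.0,    # assumed typical value
    "dry algal biomass": 15.0,  # ~half of coal, per the claim above
    "dry wood": 16.0,           # assumed typical value, for context
}
coal = heating_values_mj_per_kg["bituminous coal"]
for fuel, hv in heating_values_mj_per_kg.items():
    print(f"{fuel}: {hv:.0f} MJ/kg ({hv / coal:.0%} of coal)")
```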
It is often suggested that bioethanol, which essentially uses crops as fuel, should be used instead of algal biofuel. The main problem with this proposal is that farmers would spend more time cultivating inedible crops, because those make for better fuel, which would create food shortages on top of the world's existing hunger problem. Farming crops also takes up land, while growing algae does not [7].
Drawbacks
The main problems associated with using algae as a biofuel are technological and economic. We simply do not have the technology in place right now to produce enough algae to completely replace fossil fuels. Doing so would require establishing full-scale production plants, which is not as economically viable as simply continuing to use the fossil fuels that degrade our planet [12]. Receiving funding for the commercialization of algae is the biggest obstacle this plan faces: it is difficult to get money allocated to environmental conservation efforts because, unfortunately, they do not rank highly among our government's priorities. Algal carbon sequestration has also never been observed at a commercial scale, so there is hesitation to fully commit resources to something that seems like a gamble.
Alternative Uses
It has also been proposed that algal biomass grown to sequester carbon dioxide be used in the agricultural industry. As previously mentioned, the creators of ATS have suggested using it as a fertilizer [11]. Others note that it can be used to feed livestock or humans, as some cultures already consume algae [12]. The seemingly limitless supply of microbes can also be harvested for heavy use in the medical industry, in the form of antimicrobial, antiviral, anti-inflammatory, anti-cancer, and antioxidant treatments [7].
Conclusion
Algae can be used to fight climate change because they remove carbon dioxide from our atmosphere, store it as biomass, and replace it with oxygen. Arguments have been made in many directions over the best method of algal cultivation. Triggering open-ocean algal blooms is certainly the most cost-efficient of these methods, and it produces the most algal biomass; the problem is that these blooms have devastating ecological effects on the biological communities they contact. Photobioreactors are another popular method because of their ability to efficiently produce large quantities of algae; however, the main barrier to their use is the extremely high cost of construction and operation. With more focus on developing lower-cost photobioreactors, they could become the primary source of algal growth. Algal turf scrubbing is another cultivation strategy, though it struggles to acquire adequate funding. BICCAPS is a relatively inexpensive and eco-friendly way to grow algae in a closed system, but it yields low quantities of algal biomass compared to the other systems.
The raw algal biomass from these growth methods can potentially be used as a biofuel. Dry algae have a high caloric value, which makes them well suited for burning to power equipment. They do not burn as well as fossil fuels, but over their growth and combustion cycle they release more oxygen and less carbon dioxide than fossil fuels. Funding will be needed to scale up algae production to make this a reality, but with more research and advances in the field, algal growth could remove large amounts of the carbon dioxide stuck in Earth's atmosphere and become our primary fuel source down the line.
References
- Anguselvi V, Masto R, Mukherjee A, Singh P. CO2 Capture for Industries by Algae. IntechOpen. 2019 May 29.
- Toochi EC. Carbon sequestration: how much can forestry sequester CO2? MedCrave. 2018;2(3):148–150.
- Haoyang C. Algae-Based Carbon Sequestration. IOP Conf. Series: Earth and Environmental Science. 2018 Nov 1. doi:10.1088/1755-1315/120/1/012011
- Climate Science Special Report.
- Nath A, Tiwari P, Rai A, Sundaram S. Evaluation of carbon capture in competent microalgal consortium for enhanced biomass, lipid, and carbohydrate production. 3 Biotech. 2019 Oct 3.
- Ghosh A, Kiran B. Carbon Concentration in Algae: Reducing CO2 From Exhaust Gas. Trends in Biotechnology. 2017 May 3:806–808.
- Kumar A, Kaushal S, Saraf S, Singh J. Microbial bio-fuels: a solution to carbon emissions and energy crisis. Frontiers in Bioscience. 2018 Jun 1:1789–1802.
- Moreira D, Pires JCM. Atmospheric CO2 capture by algae: Negative carbon dioxide emission path. Bioresource Technology. 2016 Oct 10:371–379.
- Wells ML, Trainer VL, Smayda TJ, Karlson BSO, Trick CG. Harmful algal blooms and climate change: Learning from the past and present to forecast the future. Harmful Algae. 2015;49:68–93.
- Grattan L, Holobaugh S, Morris J. Harmful algal blooms and public health. Harmful Algae. 2016;57:2–8.
- Calahan D, Osenbaugh E, Adey W. Expanded algal cultivation can reverse key planetary boundary transgressions. Heliyon. 2018;4(2).
- Adeniyi O, Azimov U, Burluka A. Algae biofuel: Current status and future applications. Renewable and Sustainable Energy Reviews. 2018;90:316–335.
- Zhu C, Zhang R, Chen L, Chi Z. A recycling culture of Neochloris oleoabundans in a bicarbonate-based integrated carbon capture and algae production system with harvesting by auto-flocculation. Biotechnology for Biofuels. 2018 Jul 24.
- Richardson JW, Johnson MD, Zhang X, Zemke P, Chen W, Hu Q. A financial assessment of two alternative cultivation systems and their contributions to algae biofuel economic viability. Algal Research. 2014;4:96–104.
Environmental Effects of Habitable Worlds on Protein Stability
By Ana Menchaca, Biochemistry and Molecular Biology ‘20
Author’s Note: As a biochemistry major hoping to pursue an academic career in astrobiological research, I found that this paper jumped out at me while I was searching for a class assignment topic. It goes to show just how many paths there are to take in investigating life elsewhere in the universe, and how much we still have yet to discover and understand.
The search for life elsewhere is a vast, challenging undertaking, and investigating conditions on worlds deemed habitable provides insight into our current understanding of the existence of life. The conditions for a world to be considered potentially habitable mirror those of life on Earth: a source of energy, the common essential elements that make up life (carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur), and a solvent for chemical interactions (e.g., H2O). Understanding how chemicals and molecular components might interact with these environments can give us a better sense of what could actually hold the potential for life as we currently know it. Among the most important of these molecular components are proteins.
Research on protein stability on Saturn's largest moon, Titan, published in November 2019, presents an early foray into these considerations. The study examines the implications of variable environmental conditions for protein interactions and how this could inform the detection of potentially habitable, Earth-like worlds; in particular, it asks whether the conditions deemed necessary on Earth are really necessary for the survival of proteins. Molecular dynamics simulations, run with the software package GROMACS, explored structural interactions based on our current knowledge of proteins [1].
Titan is one of several potentially habitable candidates in our solar system, a category that also includes the moons Europa, Enceladus, Ganymede, and Callisto. All of these moons are thought to have subsurface oceans, with the potential for chemical building blocks, liquid H2O, and sources of energy. Models show that Titan's subsurface ocean may contain hydrogen, carbon, nitrogen, and ammonia (the ammonia ensuring that the ocean remains liquid), thus indicating the potential for Earth-like biochemistry [1].
Martin et al. explored the potential effects of Titan's hypothesized environment on the integrity of biologically relevant molecules, measuring protein compactness, flexibility, and backbone dihedral angle distributions. Because protein folding is affected by chemical affinities and electrostatic interactions with the surrounding environment, the differences between Titan's high-pressure subsurface ocean and conditions on Earth have the potential to affect the folding and behavior of these proteins [1]. Even on Earth, extremophiles in hydrothermal vents have varying versions of common proteins, which suggests the potential for novel protein conformations that perform similar functions under more extreme conditions than Earth's. The potential existence of such conformers, which still provide the same structures and functions as Earth proteins, broadens and changes our scope of what is necessary for, and indicative of, life.
In comparing Titan-like conditions to Earth, Martin et al. observed variations in the behavior of selected proteins, chosen to highlight the common folds of alpha helices and beta sheets. Proteins are organized at several levels of structure: primary, secondary, and tertiary. The primary structure is the amino acid sequence, secondary structure is created by interactions of the polypeptide with itself, and tertiary structure is the protein’s overall three-dimensional fold. Alpha helices and beta sheets are the most prevalent secondary structures of proteins on Earth, and the complex interactions and stability of these structures drive many biochemical processes.
The root-mean-square fluctuation (RMSF), a measure of how much atoms fluctuate about their average positions, was lower on average for proteins in Titan-like conditions than for the same proteins under Earth conditions, indicating less structural variability. Additionally, one of the proteins, rather than failing to settle into a specific secondary structure as it does on Earth, adopted a pi helix conformation, a secondary structure that is uncommon on Earth because it is less stable [1]. Owing to this lower stability relative to alpha helices, pi helices are typically found near functional sites. These varying secondary structures affect how proteins interact with other molecules and enzymes in complex ways, something that in the case of pi helices is less explored given their relative rarity on Earth.
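To make the metric concrete, here is a minimal sketch of the RMSF arithmetic in Python with NumPy. The randomly generated trajectory is a placeholder standing in for coordinates exported from an MD package (GROMACS provides its own analysis tools; this only illustrates the underlying calculation), and the array shapes are assumptions, not data from the study.

```python
import numpy as np

# Hypothetical trajectory: n_frames snapshots of n_atoms (x, y, z) positions,
# standing in for coordinates exported from an MD simulation.
rng = np.random.default_rng(seed=0)
n_frames, n_atoms = 1000, 50
traj = rng.normal(scale=0.1, size=(n_frames, n_atoms, 3))  # placeholder, in nm

# RMSF of atom i = sqrt( time-average of |r_i(t) - <r_i>|^2 )
mean_pos = traj.mean(axis=0)                   # (n_atoms, 3) average positions
sq_dev = ((traj - mean_pos) ** 2).sum(axis=2)  # squared deviation per frame
rmsf = np.sqrt(sq_dev.mean(axis=0))            # one value per atom

# A lower average RMSF, as reported for Titan-like conditions,
# means the atoms stray less from their average positions.
print(f"mean RMSF across atoms: {rmsf.mean():.3f} nm")
```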
These results show that while beta-sheets show similar behavior and presence in Titan-like conditions as they do on Earth, there’s also a tendency towards less common conformations (pi helices). These results expose both this variation of protein conformation and shape in differing conditions, and the survivability of proteins in non-Earth environments. This shows the possibility of discovering life in forms we are unfamiliar with, while also proving proteins, a vital component of life, are capable of existing in extraterrestrial environments. This research helps prove that those planets deemed habitable really are such, and the further study of the specific conformations and interactions of these proteins could provide us with more specific knowledge of what we might identify elsewhere. While this research is an early exploration of potential conditions on Titan and potentially other bodies with subsurface oceans, it still opens the door for further studies of environmental effects on known life, thus expanding our understanding of the potential for life to exist elsewhere.
Sources
- Martin, Kyle P., Shannon M. Mackenzie, Jason W. Barnes, and F. Marty Ytreberg. “Protein Stability in Titan’s Subsurface Water Ocean.” Astrobiology 20, no. 2 (January 2020): 190–98. https://doi.org/10.1089/ast.2018.1972.
- Abrevaya, Ximena C., Rika Anderson, Giada Arney, Dimitra Atri, Armando Azúa-Bustos, Jeff S. Bowman, William J. Brazelton, et al. “The Astrobiology Primer v2.0.” Astrobiology 16, no. 8 (January 2016): 561–653. https://doi.org/10.1089/ast.2015.1460.
Where the Bison Roam and the Dung Beetles Roll: How American Bison, Dung Beetles, and Prescribed Fires are Bringing Grasslands Back
By John Liu, Wildlife, Fish, and Conservation Biology ‘21
Author’s Note: In this article, I will explore the outsized impact that teeny tiny dung beetles have on American grasslands. Dung beetles, along with reintroduced bison and prescribed fires, are stomping, rolling, and burning through the landscape, all in an effort to revive destroyed grassland habitats. Barber et al. look at how the beetles react to the bison herds and prescribed fires. Seemingly unrelated factors interact with each other closely, producing results that bring hope to one of the most threatened habitats.
Watching grass grow: it’s lit
Grasslands are quiet from afar, often characterized by windblown tallgrasses and peeking prairie dogs. But in fact, they are dynamic. Historically, grasslands were constantly changing: fires ripping through the landscape, bison stampedes kicking up dust, and grasses changing colors by the season [2]. However, climate change, growing human populations, and agricultural conversion all contribute to an accelerating loss of critical habitats, with grasslands among the most affected [7]. The loss of grasslands not only exterminates the fauna that once resided there but also eliminates the ecosystem services those habitats once provided. Thus, restoring grassland habitats is of increasing concern. With the help of bison, dung beetles, and prescribed fires, recovery of grasslands is promising and likely swift.
Eat, poop, burn, repeat
As previously mentioned, grasslands thrive when continuously disturbed. The constant disturbance keeps woody vegetation from encroaching, nonnative plants from invading, and biodiversity from declining as a result of competitive exclusion between species [12]. To accomplish this, grasslands rely on large herbivore grazers, such as American bison (Bison bison), to rip through the vegetation, and on fires to clear large areas of dry debris [9].
American bison are herbivore grazers, animals that feed on plant matter near the ground. The presence of these grazers alters available plant biomass, vegetation community structure, and soil conditions through constant trampling, consumption, and digestion of plant matter [9, 11]. Due to their valuable impact on the landscape, bison are considered a keystone species, one with an outsized, essential role in the success of an ecosystem [8]. Grasslands would look vastly different without bison walking, eating, and defecating on them [9].
But bison do not aimlessly roam the grasslands, eating anything they come across. They specifically target areas that have been recently burned, because scorched areas flush with new growth that is higher in nutritional content [3, 5]. Historically, lightning strikes or intense summer heat sparked these fires, driving the movement of grazers, but human intervention now inhibits these natural occurrences. Instead, prescribed fires (planned, controlled burns performed by humans) mitigate the loss of natural fires and encourage the bison’s selective foraging behavior [4, 12]. Inciting bison to follow burned patches benefits the grasslands in more ways than one. First, it prevents overgrazing of any one particular area: as the herds move across the landscape, cleared areas can reestablish while others are grazed. Second, the simple act of traversing large distances physically changes the landscape. Bison are large animals that travel in herds, trampling vegetation and compacting the soil beneath their hooves as they move. Finally, grazing bison interrupt competitive exclusion (when competition for resources limits some species’ success) among native plants. They indiscriminately consume vegetation in these areas, leaving little room for any one plant species to outcompete another [9].
The world is your toilet… with dung beetles!
What goes in must come out, and bison are no exception to that rule. After digesting the grasses they eat, bison leave behind a trail of dung and urine. This nitrogen-rich waste feeds back into the ecosystem, offering valuable nutrients to plants and soil-dwelling organisms alike [1]. But a recent study by Barber et al. highlights a small but critical player that ensures nutrient distribution is maximized in grasslands: the dung beetles (Scarabaeidae: Scarabaeinae and Aphodiinae, and Geotrupidae).
Dung beetles rely on the solid waste of their mammalian partners. The beetles eat, distribute, and even bury the dung, which aids carbon sequestration [10]. They are found around the world, from the rainforests of Borneo to the grasslands of North America, and interact with each environment differently. In South America, dung beetles disperse seeds found in the waste of fruit-loving howler monkeys (Alouatta spp.) [6], while in North America they spread nutrients found in the waste of grazing bison. They provide a unique ecosystem function: scattering nutrient-rich dung throughout vast landscapes. These attributes have made them an increasingly popular study taxon in recent years.
Figure 1: Grassland health is largely dependent on the interplay of multiple living and non-living elements. In 1.1, the area is dominated by woody vegetation and few grasses due to a lack of disturbance. In 1.2, the introduction of prescribed fires clears some woody vegetation, allowing grasses to compete. In 1.3, bison introduce nutrients into the landscape, increasing productivity; however, the distribution of dung is limited. In 1.4, the addition of dung beetles leads to better distribution of nutrients and thus more productivity and species diversity.
Barber et al. took a closer look at how exactly dung beetles react to bison grazing and to prescribed fires blazing through their grassy fields. They found significant contributions from each, with both noticeably directing the movement and influencing the abundance of the beetles. As the bison followed the flames, so did the beetles. The beetles’ dependence on bison dung showed when researchers compared beetle abundance in two kinds of areas: those with bison and those without. There were significantly more beetles in areas with bison, likely feeding on their dung, scattering it, and burying it, all while simultaneously feeding the landscape. Prescribed fires also led to increases in beetle abundance: whether a site was 1.5 years or 30 years post-restoration, researchers consistently saw beetle abundance rise when prescribed fires were performed. This further underscores the importance of disturbance in grassland habitats, not only for ecosystem health but also for species richness.
And the grass keeps growing
The reintroduction of bison to the grasslands of America has proved successful in rebuilding a lost habitat, with the help of dung beetles and prescribed fires. However, bison and dung beetles are just one example of the unlikely pairings rebuilding lost habitats. Although large-scale ecological processes have been widely studied, species-to-species interactions are often overlooked. Continued surveys of the grasslands will reveal more about the interactions of contributing factors and their effects on each other and the habitat around them.
Citations
- Barber, Nicholas A., et al. “Initial Responses of Dung Beetle Communities to Bison Reintroduction in Restored and Remnant Tallgrass Prairie.” Natural Areas Journal, vol. 39, no. 4, 2019, p. 420. doi:10.3375/043.039.0405.
- Collins, Scott L., and Linda L. Wallace. Fire in North American Tallgrass Prairies. University of Oklahoma Press, 1990.
- Coppedge, B.R., and J.H. Shaw. 1998. Bison grazing patterns on seasonally burned tallgrass prairie. Journal of Range Management 51:258-264.
- Fuhlendorf, S.D., and D.M. Engle. 2004. Application of the fire–grazing interaction to restore a shifting mosaic on tallgrass prairie. Journal of Applied Ecology 41:604-614.
- Fuhlendorf, S.D., D.M. Engle, J.A.Y. Kerby, and R. Hamilton. 2009. Pyric herbivory: Rewilding landscapes through the recoupling of fire and grazing. Conservation Biology 23:588-598.
- Genes, L., Fernandez, F. A., Vaz-de-Mello, F. Z., da Rosa, P., Fernandez, E., and Pires, A. S. (2018). Effects of howler monkey reintroduction on ecological interactions and processes. Conservation Biology. doi:10.1111/cobi.13188.
- Gibson, D.J. 2009. Grasses and Grassland Ecology. Oxford University Press, Oxford, UK.
- Khanina, Larisa. “Determining Keystone Species.” Ecology and Society, The Resilience Alliance, 15 Dec. 1998, www.ecologyandsociety.org/vol2/iss2/resp2/.
- Knapp, Alan K., et al. “The Keystone Role of Bison in North American Tallgrass Prairie: Bison Increase Habitat Heterogeneity and Alter a Broad Array of Plant, Community, and Ecosystem Processes.” BioScience, vol. 49, no. 1, 1999.
- Menendez, R., P. Webb, and K.H. Orwin. 2016. Complementarity of dung beetle species with different functional behaviours influence dung–soil carbon cycling. Soil Biology and Biochemistry 92:142-148.
- McMillan, Brock R., et al. “Vegetation Responses to an Animal-Generated Disturbance (Bison Wallows) in Tallgrass Prairie.” The American Midland Naturalist, vol. 165, no. 1, 2011, pp. 60–73. doi:10.1674/0003-0031-165.1.60.
- Packard, S., and C.F. Mutel. 2005. The Tallgrass Restoration Handbook: For Prairies, Savannas, and Woodlands. Island Press, Washington, DC.
- Raine, Elizabeth H., and Eleanor M. Slade. “Dung Beetle–Mammal Associations: Methods, Research Trends and Future Directions.” Proceedings of the Royal Society B: Biological Sciences, vol. 286, no. 1897, 2019, p. 20182002. doi:10.1098/rspb.2018.2002.
Pharmacogenomics in Personalized Medicine: How Medicine Can Be Tailored To Your Genes
By: Anushka Gupta, Genetics and Genomics, ‘20
Author’s Note: Modern medicine relies on technologies that have barely changed over the past 50 years, despite all of the research that has been conducted on new drugs and therapies. Although medications save millions of lives every year, a drug that works for one person may not work for someone else. With this paper, I hope to shed light on this rising field and the lasting effects it can have on the human population.
Future of Modern Medicine
Take the following scenario: you’re experiencing a persistent cough, a loss of appetite, and unexplained weight loss, only to then find an egg-like swelling under your arm. Today, a doctor would determine your diagnosis by taking a biopsy of your arm and analyzing the cells under the microscope, a 400-year-old technology. You have non-Hodgkin’s lymphoma. Today’s treatment plan for this condition is a generic, one-size-fits-all chemotherapy, some combination of alkylating agents, anti-metabolites, and corticosteroids (to name a few) injected intravenously to target fast-dividing cells, an approach that harms cancer cells and healthy cells alike [1]. This approach may be effective, but if it doesn’t work, your doctor tells you not to despair: there are some other possible drug combinations that might be able to save you.
Flash forward to the future. Your doctor will now instead scan your arm with a DNA array, a computer chip-like device that can register the activity patterns of thousands of different genes in your cells. It will then tell you that your case of lymphoma is actually one of six distinguishable types of T-cell cancer, each of which is known to respond best to different drugs. Your doctor will then use a SNP chip to flag medicines that won’t work in your case since your liver enzymes break them down too fast.
Tailoring Treatment to the Individual
The latter case is the one we would all hope to encounter in this scenario. Luckily, it may become reality with the implementation of pharmacogenomics in personalized medicine. New medications typically require extensive trials and testing to ensure their safety across the entire population; pharmacogenomics holds potential as a way to streamline this traditional testing process for pharmaceuticals.
In a conventional trial, only the average response is reported, and if a drug shows adverse side effects in any fraction of the population, it is immediately rejected. “Many drugs fail in clinical trials because they turn out to be toxic to just 1% or 2% of the population,” says Mark Levin, CEO of Millennium Pharmaceuticals [2]. With genotyping, drug companies will be able to identify the specific gene variants underlying severe side effects, allowing occasionally toxic drugs to be accepted, as gene tests will determine who should and shouldn’t get them. Such pharmacogenomic advances could more than double the rate at which FDA-approved drugs reach the clinic. In the past, fast-tracking was reserved for medications intended to treat otherwise untreatable illnesses. Pharmacogenomics, however, could allow medications to undergo an expedited process regardless of the severity of the disease. There would be fewer hurdles to clear because the entire population would not need to produce a desirable outcome; as long as an adverse reaction can be attributed to a specific genetic variant, the drug could still be approved by the FDA [3].
Certain treatments already follow this model, such as one for patients with a particular genetic variant of cystic fibrosis. This approach should also help reduce the number of adverse drug reactions each year. Pharmacogenomics is still a young field and not without its challenges, but new research continues to test its viability.
With pharmacogenomics-informed personalized medicine, individualized treatment can be designed according to one’s genomic profile to predict the clinical outcome of different treatments in different patients [4]. Normally, drugs are tested on a large population and the average response is reported. This method of medicine relies on the law of averages; personalized medicine, by contrast, recognizes that no two patients are alike [5].
Genetic Variants
A doubled approval rate would mean a larger variety of drugs available to patients with unique circumstances in which generic treatment fails. In pharmacogenomics, genomic information is used to study individual responses to drugs, and experiments can be designed to correlate particular gene variants with specific drug responses. Modern approaches, including multigene analysis and whole-genome single nucleotide polymorphism (SNP) profiles, will assist in clinical trials for drug discovery and development [5]. SNPs are especially useful because they vary from individual to individual and contribute to many variable characteristics, such as appearance and personality. A strong grasp of SNPs is fundamental to understanding why an individual may have a specific reaction to a drug; moreover, these genetic markers can be mapped to particular drug responses, as sketched below.
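As a loose illustration of how such a genotype-to-response mapping might look in software, the sketch below assigns a metabolizer phenotype from a two-allele genotype using a summed activity score, in the style of star-allele systems such as CYP2D6. The allele-function table and score cutoffs are simplified stand-ins for illustration only, not clinical guidance.

```python
# Illustrative sketch of genotype-guided phenotype prediction.
# The allele table and thresholds are simplified placeholders.

ALLELE_FUNCTION = {   # hypothetical, CYP2D6-style activity values
    "*1": 1.0,        # normal-function allele
    "*10": 0.25,      # decreased-function allele
    "*4": 0.0,        # no-function allele
}

def metabolizer_phenotype(allele_a: str, allele_b: str) -> str:
    """Classify a two-allele genotype by its summed activity score."""
    score = ALLELE_FUNCTION[allele_a] + ALLELE_FUNCTION[allele_b]
    if score == 0:
        return "poor metabolizer"
    if score < 1.25:
        return "intermediate metabolizer"
    if score <= 2.25:
        return "normal metabolizer"
    return "ultrarapid metabolizer"

for genotype in [("*1", "*1"), ("*1", "*4"), ("*4", "*4")]:
    print(genotype, "->", metabolizer_phenotype(*genotype))
```

In a real pharmacogenomic workflow, a predicted phenotype like "poor metabolizer" would then be matched against published dosing guidance for the drug in question.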
Research associating specific genetic variants with varying drug responses will be fundamental to prescribing the right drug for each patient. The design and implementation of personalized medical therapy will not only improve the outcome of treatments but also reduce the risk of toxicity and other adverse effects. A better understanding of individual variation and its effect on drug response, metabolism, excretion, and toxicity has the potential to replace the trial-and-error approach to treatment. That said, evidence of the clinical utility of pharmacogenetic testing is available for only a few medications, and Food and Drug Administration (FDA) labels require pharmacogenetic testing for only a small number of drugs [6].
Cystic Fibrosis: Case Study
While this concept may seem far-fetched, a few treatments promoted by this field of targeted therapy have already been approved by the FDA for certain populations. For example, the drug ivacaftor was approved for patients with cystic fibrosis (CF), a genetic disease that causes persistent lung infections and limits the ability to breathe. Those diagnosed with CF have a mutation in the cystic fibrosis transmembrane conductance regulator (CFTR) gene, rendering the resulting CFTR protein defective. This protein is responsible for moving chloride to the cell surface, attracting the water that keeps mucus thin. In those with the mutation, the mucus is instead thick and sticky, trapping bacteria that would normally be cleared and leaving the patient susceptible to infection [7]. Ivacaftor is approved only for CF patients who carry the G551D variant, a specific mutation in the CFTR gene. The drug targets the CFTR protein, increases its activity, and consequently improves lung function [8]. It is important to note that G551D is just one of the roughly 1,700 currently known mutations that can cause CF.
Adverse Drug Reactions
Pharmacogenomics also addresses the unpredictable adverse effects of drugs, especially medications taken too often or for too long. These adverse drug reactions (ADRs) are estimated to cost $136 billion annually. Within the United States alone, serious side effects from pharmaceutical drugs occur in 2 million people each year and may cause as many as 100,000 deaths, which would make ADRs the fourth most common cause of death according to the FDA [9].
The mysterious and unpredictable side effects of various drugs have been chalked up to individual variation encoded in the genome rather than drug dosage. Genetics also determines hypersensitivity reactions in patients who are allergic to certain drugs; in these cases, the body initiates a rapid and aggressive immune response that can hinder breathing and may even lead to cardiovascular collapse [5]. This is just one of countless cases where unknown patient hypersensitivity to drugs can lead to extreme outcomes. New research in pharmacogenomics, however, suggests that as much as 80% of the variability in drug response has a genetic component, implying that a significant share of ADRs could be avoided through genetically informed patient management, leading to better outcomes [11].
Challenges
Pharmacogenomics-informed medicine may signal the ultimate demise of the traditional model of drug development, but the concept of targeted therapy is still in its early stages. One reason is that most pharmacogenetic traits involve more than one gene, making it even more difficult to understand or predict the different variations of a complex phenotype like drug response. Genome-wide approaches have produced evidence of drugs having multiple targets and numerous off-target effects [4].
Even though this is a promising field, challenges remain. There is a large gap between genomic research and the primary care workforce: many healthcare workers are not prepared to integrate genomics into their daily practice. Medical school curricula would need to be updated to cover pharmacogenomics-informed personalized medicine. The complexity of the field and its inherently interdisciplinary nature also create a barrier to presenting this new research to broader audiences, including medical personnel [12].
Conclusion
The field has made important strides over the past decade, but clinical trials are still needed not only to identify the various links between genes and treatment outcome, but also to clarify the meaning of these associations and translate them into prescribing guidelines [4]. Despite its potential, there are not many examples where pharmacogenomics has demonstrated clinical utility, especially since many genetic variants have not yet been studied. Nonetheless, progress in the field gives us a glimpse of a time when pharmacogenomics and personalized medicine will be a part of regular patient care.
Sources
- “Chemotherapy for Non-Hodgkin Lymphoma.” American Cancer Society, www.cancer.org/cancer/non-hodgkin-lymphoma/treating/chemotherapy.html.
- Greek, Jean Swingle., and C. Ray. Greek. What Will We Do If We Don’t Experiment on Animals?: Medical Research for the Twenty-First Century. Trafford, 2004, Google Books, books.google.com/books?id=mB3t1MTpZLUC&pg=PA153&lpg=PA153&dq=mark+levin+drugs+fail+in+clinical+trials&source=bl&ots=ugdZPtcAFU&sig=ACfU3U12d-BQF1v67T3WCK8-J4SZS9aMPg&hl=en&sa=X&ved=2ahUKEwjVn6KfypboAhUDM6wKHWw1BrQQ6AEwBXoECAkQAQ#v=onepage&q=mark%20levin%20drugs%20fail%20in%20clinical%20trials&f=false.
- Chary, Krishnan Vengadaraga. “Expedited Drug Review Process: Fast, but Flawed.” Journal of Pharmacology & Pharmacotherapeutics, Medknow Publications & Media Pvt Ltd, 2016, www.ncbi.nlm.nih.gov/pmc/articles/PMC4936080/.
- Schwab, M., Schaeffeler, E. Pharmacogenomics: a key component of personalized therapy. Genome Med 4, 93 (2012). https://doi.org/10.1186/gm394
- Adams, J. (2008) Pharmacogenomics and personalized medicine. Nature Education 1(1):194
- Singh D.B. (2019) The Impact of Pharmacogenomics in Personalized Medicine. In: Silva A., Moreira J., Lobo J., Almeida H. (eds) Current Applications of Pharmaceutical Biotechnology. Advances in Biochemical Engineering/Biotechnology, vol 171. Springer, Cham
- “About Cystic Fibrosis.” CF Foundation, www.cff.org/What-is-CF/About-Cystic-Fibrosis/.
- Eckford PD, Li C, Ramjeesingh M, Bear CE: CFTR potentiator VX-770 (ivacaftor) opens the defective channel gate of mutant CFTR in a phosphorylation-dependent but ATP-independent manner. J Biol Chem. 2012, 287: 36639-36649. 10.1074/jbc.M112.393637.
- Pirmohamed, Munir, and B. Kevin Park. “Genetic Susceptibility to Adverse Drug Reactions.” Trends in Pharmacological Sciences, vol. 22, no. 6, 2001, pp. 298–305. doi:10.1016/s0165-6147(00)01717-x.
- Adams, J. (2008) Pharmacogenomics and personalized medicine. Nature Education 1(1):194
- Cacabelos, Ramón, et al. “The Role of Pharmacogenomics in Adverse Drug Reactions.” Expert Review of Clinical Pharmacology, U.S. National Library of Medicine, May 2019, www.ncbi.nlm.nih.gov/pubmed/30916581.
- Roden, Dan M, et al. “Pharmacogenomics: Challenges and Opportunities.” Annals of Internal Medicine, U.S. National Library of Medicine, 21 Nov. 2006, www.ncbi.nlm.nih.gov/pmc/articles/PMC5006954/#idm140518217413328title.
CRISPR Conundrum: Pursuing Consensus on Human Germline Editing
By Daniel Erenstein, Neurobiology, Physiology, and Behavior, ‘21
Author’s Note: In November 2018, a scientist in China became the first person to claim that they had edited the genes of human embryos carried to term. Twins, given the pseudonyms Lulu and Nana, were born from these very controversial experiments. This news rapidly propelled the debate on human germline genome editing into the mainstream. My interest in this issue was inspired by my involvement with the Innovative Genomics Institute, located in Berkeley, CA. While attending Berkeley City College during the spring and fall semesters of 2019, I participated in the institute’s CRISPR journal club for undergraduates. Each week, we discussed the latest research from the field of CRISPR gene editing. I also took part in a conference, attended by leading geneticists, bioethicists, philosophers, professors of law and policy, science journalists, and other stakeholders, examining where the consensus, if any, lies on human germline genome editing. Discussions from this conference serve as a foundation for this submission to The Aggie Transcript.
New details have emerged in the ongoing controversy that kicked off in November of 2018 when a Chinese biophysicist claimed that, during in vitro fertilization, he had genetically edited two embryos that were later implanted into their mother. Twins, anonymously named Lulu and Nana, are believed to have been born as a result of these experiments. This announcement from He Jiankui in a presentation at the Second International Summit on Human Germline Editing in Hong Kong was largely met with swift condemnation from scientists and bioethicists [1, 2].
Late last year, excerpts of the unpublished research report were made public for the first time since He’s announcement, shedding light on his approach to editing resistance to human immunodeficiency virus, or HIV, into human genomes using CRISPR-Cas9 [3]. CRISPR, short for clustered regularly interspaced short palindromic repeats, refers to specific patterns in bacterial DNA. Normally, a bacterium that has survived an attack by a bacteriophage (a virus that infects bacteria and depends on them in order to reproduce) will catalog bacteriophage DNA by incorporating these viral sequences into its own DNA library. This genetic archive of viral DNA, stored between the palindromic repeats of CRISPR, can be revisited as a reference when the bacterium faces future attacks, aiding in its immune response [4].
To respond effectively, bacteria will transcribe a complementary CRISPR RNA molecule from the existing CRISPR sequence. Using crRNA—short for CRISPR RNA—as a guide, CRISPR-associated proteins play the part of a search engine, scanning the cell for any entering viral DNA that matches the crRNA sequence [5]. There are many subtypes of CRISPR-associated proteins [6], but Cas9 is one such type that acts as an enzyme by catalyzing double-stranded breaks in sequences complementary to the guide [7]. This immune system effectively defends against the DNA of invading bacteriophages, protecting the bacterium from succumbing to the virus [5].
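As a toy illustration of this search-engine behavior, the sketch below scans a DNA string for sites where a 20-nucleotide guide sequence is immediately followed by the NGG protospacer-adjacent motif (PAM) that Cas9 requires before it will cut. The guide and target sequences are invented for demonstration, not real genomic data.

```python
# Toy illustration of Cas9 target search: find positions where the DNA
# matches a 20-nt guide sequence immediately followed by an "NGG" PAM.

GUIDE = "GTTCAGGATCCGTACTTAGC"  # hypothetical 20-nt protospacer

def find_cas9_sites(dna: str, guide: str) -> list[int]:
    sites = []
    window = len(guide)
    for i in range(len(dna) - window - 2):
        protospacer = dna[i : i + window]
        pam = dna[i + window : i + window + 3]
        # Cas9 needs both the guide match and an NGG PAM ("N" = any base).
        if protospacer == guide and pam[1:] == "GG":
            sites.append(i)
    return sites

dna = "AAAT" + GUIDE + "TGGCCA" + GUIDE + "TACTT"
print(find_cas9_sites(dna, GUIDE))  # only the first copy has an NGG PAM
```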
A cell’s built-in mechanisms typically repair any double-stranded breaks in DNA via one of two processes: nonhomologous end-joining (NHEJ) or homology-directed repair (HDR) [8]. During NHEJ, base pairs might be unintentionally inserted or deleted, causing frameshift mutations called indels in the repaired DNA sequence. These mutations significantly affect the structure and function of any protein encoded by the sequence and can result in a completely nonfunctional gene product. NHEJ is frequently relied upon by gene editing researchers to “knock out” or inactivate certain genes. HDR is less efficient, but the process is often exploited by scientists to “knock in” genes or substitute DNA [9].
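A minimal sketch shows why NHEJ's indels are so disruptive when their length is not a multiple of three, the width of one codon: every downstream codon shifts. The coding sequence here is invented purely for illustration.

```python
# Toy demonstration that indels not divisible by 3 shift the reading frame.
# The coding sequence below is invented for illustration.

def codons(seq: str) -> list[str]:
    """Split a coding sequence into triplet codons (reading frame 0)."""
    return [seq[i : i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

coding = "ATGGCTGAAACCTTTGGA"
print(codons(coding))                     # original reading frame

deletion_2nt = coding[:6] + coding[8:]    # remove 2 bases: frameshift
deletion_3nt = coding[:6] + coding[9:]    # remove 3 bases: frame preserved

print(codons(deletion_2nt))               # every downstream codon is altered
print(codons(deletion_3nt))               # downstream codons stay in frame
```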
CRISPR is programmable, meaning researchers can direct it to chosen sequences and, through the cell’s repair of the cut, add or alter DNA at those sites, precisely editing the cell’s genetic code at specific locations. Jiankui He was not the first to use CRISPR to edit the genes of human embryos, but no one was known to have ever performed these experiments on viable embryos intended for a pregnancy. He Jiankui and two of his colleagues have since been fined and sentenced to prison for falsifying ethical review documents and misinforming doctors, the state-run Chinese news agency Xinhua reported in December 2019 [10]. But He’s experiments supposedly yielded another birth during the second half of 2019 [11], confirmed by China in January [12], and Russian scientist Denis Rebrikov has since expressed strong interest in moving forward with human germline genome editing to explore a potential cure for deafness [13].
Despite what seems like overwhelming opposition to human germline genome editing, He’s work has even generated interest from self-described biohackers like Josiah Zayner, CEO of The ODIN, a company which produces do-it-yourself genetic engineering kits for use at home and in the classroom.
“As long as the children He Jiankui engineered haven’t been harmed by the experiment, he is just a scientist who forged some documents to convince medical doctors to implant gene-edited embryos,” said Zayner in a STAT opinion reacting to news of He’s sentence [14]. “The 4-minute mile of human genetic engineering has been broken. It will happen again.”
Concerns abound, though, about the use of this technology to cure human diseases. And against the chilling backdrop of a global COVID-19 pandemic, fears run especially high about bad actors using CRISPR gene editing with malicious intent.
“A scientist or biohacker with basic lab know-how could conceivably buy DNA sequences and, using CRISPR, edit them to make an even more panic-inducing bacteria or virus,” said Neal Bear, a television producer and global health lecturer at Harvard Medical School, in a recent STAT opinion [15]. “What’s to stop a rogue scientist from using CRISPR to conjure up an even deadlier version of Ebola or a more transmissible SARS?”
Into the unknown: understanding off-target effects
In his initial presentation, He said that he had targeted the C-C chemokine receptor type 5 (CCR5) gene, which codes for a receptor on white blood cells recognized by HIV during infection. His presentation suggested that gene editing introduced a known mutation named CCR5Δ32 that changes the receptor enough to at least partially inhibit recognition by HIV. The babies’ father was a carrier of HIV, so this editing was performed to supposedly protect the twins from future HIV infection [16].
He’s edits to the CCR5 gene—and human germline genome editing, in general—worry geneticists because the off-target effects of introducing artificial changes into the human gene pool are largely unknown. In a video posted on his lab’s YouTube channel [17], He claimed that follow-up sequencing of the twins’ genomes confirmed that “no gene was changed except the one to prevent HIV infection.”
Excerpts from the unpublished study indicate otherwise, according to an expert asked to comment on He’s research in MIT Technology Review, because any cells taken from the twins to run these sequencing tests were no longer part of the developing embryos [3].
“It is technically impossible to determine whether an edited embryo ‘did not show any off-target mutations’ without destroying that embryo by inspecting every one of its cells,” said Fyodor Urnov, professor of molecular and cell biology at UC Berkeley and gene-editing specialist [3]. “This is a key problem for the entirety of the embryo-editing field, one that the authors sweep under the rug here.”
Urnov’s comments raise concerns about “mosaicism” in the cells of Lulu and Nana—and any other future babies brought to term after germline genome editing during embryonic stages of development. In his experiments, He used preimplantation genetic diagnosis to verify gene editing. Even if the cells tested through this technique showed the intended mutation, though, there is a significant risk that the remaining cells in the embryo were left unedited or that unknown mutations with unforeseeable consequences were introduced [16].
While the CCR5Δ32 mutation has, indeed, been found to be associated with HIV resistance [18, 19], even individuals with both copies of CCR5Δ32 can still be infected with certain strains of HIV [20]. In addition, the CCR5Δ32 mutation is found almost exclusively in certain European populations and in very low frequencies elsewhere, including China [21, 22], amplifying the uncertain risk of introducing this particular mutation into Chinese individuals and the broader Chinese gene pool [16].
Perhaps most shocking to the scientific community is the revelation that He’s experiment did not actually edit the CCR5 gene as intended. In He’s November 2018 presentation, he discussed the rates of mutation via non-homologous end-joining but made no mention of the other repair mechanism, homology-directed repair, which would be used to “knock in” the intended mutation. This “[suggests] that He had no intention of generating the CCR5Δ32 allele,” wrote Haoyi Wang and Hui Yang in a PLoS Biology paper on He’s experiments [16].
Gauging the necessity of germline genome editing
The potential of CRISPR to revolutionize how we treat diseases like cystic fibrosis, sickle cell disease, and muscular dystrophy is frequently discussed in the news; just recently, clinical trials involving a gene-editing treatment for Leber congenital amaurosis, a rare genetic eye disorder, stirred enthusiasm, becoming the first treatment to directly edit DNA while it’s still in the body [23]. While this treatment edits somatic cells, cells that are not passed on to future generations during reproduction, there is increasing demand for the use of germline genome editing as well, despite the reservations of scientists and bioethicists.
This raises the question: how will society decide what types of genetic modifications are needed? In the case of He’s experiments, most agree that germline genome editing was an unnecessary strategy for protecting against HIV. Assisted reproductive technology (ART), in which the father’s sperm is washed of excess seminal fluid before in vitro fertilization (IVF), was used in He’s experiments [3] and is already established as an effective defense against HIV transmission [24]. Appropriately handling gametes (sperm and egg cells) during IVF is an additional method used to protect the embryo from viral transmission, according to Jeanne O’Brien, a reproductive endocrinologist at the Shady Grove Fertility Center [3].
“As for considering future immunity to HIV infection, simply avoiding potential risk of HIV exposure suffices for most people,” wrote Wang and Yang in their PLoS Biology paper [16]. “Therefore, editing early embryos does not provide benefits for the babies, while posing potentially serious risks on multiple fronts.”
One such unintended risk of He’s experiments might be increased susceptibility to West Nile virus, an infection thought to be prevented by unmutated copies of the CCR5 receptor [11].
In a paper that examines the societal and ethical impacts of human germline genome editing, published last year in The CRISPR Journal [25], authors Jodi Halpern, Sharon O’Hara, Kevin Doxzen, Lea Witkowsky, and Aleksa Owen add that “this mutation may increase vulnerability to other infections such as influenza, creating an undue burden on these offspring, [so] we would opt instead for safer ways to prevent HIV infection.”
The authors go on to propose the implementation of a Human Rights Impact Assessment. This assessment would evaluate germline editing treatments or policies using questions that weigh the benefits of an intervention against its possible risks or its potential to generate discrimination. The ultimate goal of such an assessment would be to “establish robust regulatory frameworks necessary for the global protection of human rights” [25].
Most acknowledge that there are several questions to answer before human germline genome editing should proceed: Should we do it? Which applications of the technology are ethical? How can we govern human germline genome editing? Who has the privilege of making these decisions?
Evaluating consensus on germline genome editing
In late October of last year, scientists, bioethicists, policymakers, patient advocates, and religious leaders gathered with members of the public in Berkeley for a discussion centered around some of these unanswered questions. One of the pioneers of CRISPR gene editing technologies, Jennifer Doudna, is a professor of biochemistry and molecular biology at UC Berkeley, and the Innovative Genomics Institute, which houses Doudna’s lab, organized this CRISPR Consensus? conference in collaboration with the Initiative on Science, Technology, and Human Identity at Arizona State University and the Keystone Policy Center.
The goal of the conference was to generate conversation about where the consensus, if any, lies on human germline genome editing. One of the conference organizers, J. Benjamin Hurlbut, emphasized the role that bioethics—the study of ethical, social, and legal issues caused by biomedical technologies—should play in considerations of germline genome editing.
He’s “aim was apparently to race ahead of his scientific competitors but also to reshape and speed up, as he put it, the ethical debate. But speed is surely not what we need in this case,” said Hurlbut, associate professor of biology and society at Arizona State University, at the conference [26].
Central to the debate surrounding consensus is the issue of stakeholders in decision-making about germline genome editing. Experts seem to be divided in their definitions of a stakeholder, with varying opinions about the communities that should be included in governance. They do agree, however, that these discussions are paramount to ensure beneficence and justice, tenets of bioethical thought, for those involved.
An underlying reason for these concerns is that, should human germline genome editing become widely available in the future, the cost of these therapies might restrict access to certain privileged populations.
“I don’t think it’s far-fetched to say that there’s institutionalized racism that goes on around access to this technology, the democratization and self-governance of it,” said Keolu Fox, a UC San Diego scholar who studies the anthropology of natural selection from a genomics perspective. Fox focused his discussion on indigenous populations when addressing the issue of autonomy in governance of germline genome editing [26].
“If we don’t put indigenous people or vulnerable populations in the driver’s seat so that they can really think about the potential applications of this type of technology, self-governance, and how to create intellectual property that has a circular economy that goes back to their community,” Fox said, “that is continued colonialism in 2020.”
Indeed, marginalized communities have experienced the evil that genetics can be used to justify, and millions of lives have been lost throughout human history to ideologies emphasizing genetic purity like eugenics and Nazism.
“We know that history with genetics is wrought with a lot of wrongdoings and also good intentions that can go wrong, and so there’s a community distrust [of germline editing],” said Billie Liangolou, a UC San Francisco (UCSF) Benioff Children’s Hospital genetic counselor, during a panel on stakeholders that included Fox. Liangolou works with expecting mothers, guiding them through the challenges associated with difficult genetic diagnoses during pregnancy [26].
Others agree that the communities affected most by human germline genome editing should be at the forefront of decision-making about this emerging technology. Sharon Begley, a senior science writer at STAT News, told the conference audience that a mother with a genetic disease once asked her if she could “just change my little drop of the human gene pool so that my children don’t have this terrible thing that I have” [26].
This question, frequently echoed throughout society by other prospective parents, reflects the present-day interest in human germline genome editing technologies, interest that will likely continue to grow as further research on human embryos continues.
In an opinion published by STAT News, Ethan Weiss, a cardiologist and associate professor of medicine at UCSF, acknowledges the concerns of parents faced with these decisions [27]. His daughter, Ruthie, has oculocutaneous albinism, a rare genetic disorder characterized by mutations in the OCA2 gene, which is involved in producing melanin. Necessary for normally functioning vision, melanin is a pigment found in the eyes [28].
Weiss and his partner “believe that had we learned our unborn child had oculocutaneous albinism, Ruthie would not be here today. She would have been filtered out as an embryo or terminated,” he said.
But, in the end, Weiss offers up a cautionary message to readers, encouraging people to “think hard” about the potential effects of human germline genome editing.
“We know that Ruthie’s presence in this world makes it a better, kinder, more considerate, more patient, and more humane place,” Weiss said. “It is not hard, then, to see that these new technologies bring risk that the world will be less kind, less compassionate, and less patient when there are fewer children like Ruthie. And the kids who inevitably end up with oculocutaneous albinism or other rare diseases will be even less ‘normal’ than they are today.”
Weiss’ warning is underscored by disability rights scholars who say that treating genetic disorders with CRISPR or other germline editing technologies could lead to heightened focus on those who continue to live with these disabilities. In an interview with Katie Hasson of the Center for Genetics and Society, located in Berkeley, Jackie Leach Scully commented on the stigmatization that disabled people might face in a world where germline editing is regularly practiced [29].
“Since only a minority of disability is genetic, even if genome editing eventually becomes a safe and routine technology it won’t eradicate disability,” said Scully, professor of bioethics at the University of New South Wales in Australia. “The concern then would be about the social effects of [heritable genome editing] for people with non-genetic disabilities, and the context that such changes would create for them.”
Others worry about how to define the boundary between the prevention of genetic diseases and the enhancement of desirable traits—and what this means for the decisions a germline editing governing body would have to make about people’s value in society. Emily Beitiks, associate director of the Paul K. Longmore Institute on Disability at San Francisco State University, is among the community of experts who have raised such concerns [30].
“Knowing that these choices are being made in a deeply ableist culture,” said Beitiks in an article posted on the Center for Genetics and Society’s blog [30], “illustrates how hard it would be to draw lines about what genetic diseases ‘we’ agree to engineer out of the gene pool and which are allowed to stay.”
Religious leaders have also weighed in on the ethics of human germline genome editing. Father Joseph Tham, who has previously published work on what he calls “the secularization of bioethics,” presented his views on the role of religion in this debate about bioethics at the conference [26].
“Many people in the world belong to some kind of religious tradition, and I think it would be a shame if religion is not a part of this conversation,” said Tham, professor at Regina Apostolorum Pontifical University’s School of Bioethics.
Tham explained that the church already disapproves of IVF techniques, let alone human germline editing, “because in some way it deforms the whole sense of the human sexual act.”
Islamic perspectives on germline editing differ. In a paper published last year, Mohammed Ghaly, one of the conference panelists, discussed how the Islamic religious tradition informs perspectives on human genome editing in the Muslim world [31].
“The mainstream position among Muslim scholars is that before embryos are implanted in the uterus, they do not have the moral status of a human being,” said Ghaly, professor of Islam and biomedical ethics at Hamad Bin Khalifa University. “That is why the scholars find it unproblematic to use them for conducting research with the aim of producing beneficial knowledge.”
Where Muslim religious scholars draw the line, Ghaly says, is at the applications of human germline genome editing, not research about it. Issues regarding the safety and effectiveness of germline editing make its current use in viable human embryos largely untenable, according to the majority of religious scholars [31].
The unfolding, back-and-forth debate about who and how to design policies guiding human germline genome editing continues to rage on, but there is little doubt about consensus on one point. For a technology with effects as far-reaching as this one, time is of the essence.
References
- Scientist who claims to have made gene-edited babies speaks in Hong Kong. 27 Nov 2018, 36:04 minutes. Global News; [accessed 2 May 2020]. https://youtu.be/0jILo9y71s0.
- Cyranoski D. 2018. CRISPR-baby scientist fails to satisfy critics. Nature. 564 (7734): 13-14.
- Regalado A. 2019. China’s CRISPR babies: Read exclusive excerpts from the unseen original research. Cambridge (MA): MIT Technology Review; [accessed 2 May 2020]. https://www.technologyreview.com/2019/12/03/131752/chinas-crispr-babies-read-exclusive-excerpts-he-jiankui-paper/.
- Genetic Engineering Will Change Everything Forever – CRISPR. 10 Aug 2016, 16:03 minutes. Kurzgesagt – In a Nutshell; [accessed 2 May 2020]. https://youtu.be/jAhjPd4uNFY.
- Doudna J. Editing the Code of Life: The Future of Genome Editing [lecture]. 21 Feb 2019. Endowed Elberg Series. Berkeley (CA): Institute for International Studies. https://youtu.be/9Yblg9wDHZA.
- Haft DH, Selengut J, Mongodin EF, Nelson KE. 2005. A guild of 45 CRISPR-associated (Cas) protein families and multiple CRISPR/Cas subtypes exist in prokaryotic genomes. PLoS Comput Biol. 1 (6): e60. [about 10 pages].
- Makarova KS, Koonin EV. 2015. Annotation and Classification of CRISPR-Cas Systems. Methods Mol Biol. 1311: 47-75.
- Hsu PD, Lander ES, Zhang F. 2014. Development and applications of CRISPR-Cas9 for genome engineering. Cell. 157 (6): 1262-78.
- Enzmann B. 2019. CRISPR Editing is All About DNA Repair Mechanisms. Redwood City (CA): Synthego; [accessed 2 May 2020]. https://www.synthego.com/blog/crispr-dna-repair-pathways.
- Normile D. 2019. Chinese scientist who produced genetically altered babies sentenced to 3 years in jail. Science; [accessed 2 May 2020]. https://www.sciencemag.org/news/2019/12/chinese-scientist-who-produced-genetically-altered-babies-sentenced-3-years-jail.
- Cyranoski D. 2019. The CRISPR-baby scandal: what’s next for human gene-editing. Nature. 566 (7745): 440-442.
- Osborne H. 2020. China confirms three gene edited babies were born through He Jiankui’s experiments. New York City (NY): Newsweek; [accessed 2 May 2020]. https://www.newsweek.com/china-third-gene-edited-baby-1480020.
- Cohen J. 2019. Embattled Russian scientist sharpens plans to create gene-edited babies. Science; [accessed 2 May 2020]. https://www.sciencemag.org/news/2019/10/embattled-russian-scientist-sharpens-plans-create-gene-edited-babies#.
- Zayner J. 2020. CRISPR babies scientist He Jiankui should not be villainized –– or headed to prison. Boston (MA): STAT News; [accessed 2 May 2020]. https://www.statnews.com/2020/01/02/crispr-babies-scientist-he-jiankui-should-not-be-villainized/.
- Baer N. 2020. Covid-19 is scary. Could a rogue scientist use CRISPR to conjure another pandemic? Boston (MA): STAT News; [accessed 2 May 2020]. https://www.statnews.com/2020/03/26/could-rogue-scientist-use-crispr-create-pandemic/.
- Wang H, Yang H. 2019. Gene-edited babies: What went wrong and what could go wrong. PLoS Biol 17 (4): e3000224. [about 5 pages].
- About Lulu and Nana: Twin Girls Born Healthy After Gene Surgery As Single-Cell Embryos. 25 Nov 2018, 4:43 minutes. The He Lab; [accessed 2 May 2020]. https://youtu.be/th0vnOmFltc.
- Samson M, Libert F, Doranz BJ, Rucker J, Liesnard C, Farber CM, Saragosti S, Lapouméroulie C, Cognaux J, Forceille C, et al. 1996. Resistance to HIV-1 infection in caucasian individuals bearing mutant alleles of the CCR-5 chemokine receptor gene. Nature. 382 (6593): 722–5.
- Marmor M, Sheppard HW, Donnell D, Bozeman S, Celum C, Buchbinder S, Koblin B, Seage GR. 2001. Homozygous and heterozygous CCR5-Delta32 genotypes are associated with resistance to HIV infection. J Acquir Immune Defic Syndr. 27 (5): 472–81.
- Lopalco L. 2010. CCR5: From Natural Resistance to a New Anti-HIV Strategy. Viruses. 2 (2): 574–600.
- Martinson JJ, Chapman NH, Rees DC, Liu YT, Clegg JB. 1997. Global distribution of the CCR5 gene 32-base-pair deletion. Nat Genet. 16 (1): 100–3.
- Zhang C, Fu S, Xue Y, Wang Q, Huang X, Wang B, Liu A, Ma L, Yu Y, Shi R, et al. 2002. Distribution of the CCR5 gene 32-basepair deletion in 11 Chinese populations. Anthropol Anz. 60 (3): 267–71.
- Sofia M. Yep. They Injected CRISPR Into an Eyeball. 18 Mar 2020, 8:43 minutes. NPR Short Wave; [accessed 2 May 2020]. https://www.npr.org/2020/03/18/yep-they-injected-crispr-into-an-eyeball.
- Zafer M, Horvath H, Mmeje O, van der Poel S, Semprini AE, Rutherford G, Brown J. 2016. Effectiveness of semen washing to prevent human immunodeficiency virus (HIV) transmission and assist pregnancy in HIV-discordant couples: a systematic review and meta-analysis. Fertil Steril. 105 (3): 645–55.
- Halpern J, O’Hara S, Doxzen K, Witkowsky L, Owen A. 2019. Societal and Ethical Impacts of Germline Genome Editing: How Can We Secure Human Rights? The CRISPR Journal. 2 (5): 293-298.
- CRISPR Consensus? Public debate and the future of genome editing in human reproduction [conference]. 26 Oct 2019. Berkeley, CA: Innovative Genomics Institute. https://youtu.be/SFrKjItaWGc.
- Weiss E. 2020. Should ‘broken’ genes be fixed? My daughter changed the way I think about that question. Boston (MA): STAT News; [accessed 2 May 2020]. https://www.statnews.com/2020/02/21/should-broken-genes-be-fixed-my-daughter-changed-the-way-i-think-about-that-question/.
- Grønskov K, Ek J, Brondum-Nielsen K. 2007. Oculocutaneous albinism. Orphanet J Rare Dis. 2 (43). [about 8 pages].
- Hasson K. 2019. Illness or Identity? A Disability Rights Scholar Comments on the Plan to Use CRISPR to Prevent Deafness. Berkeley (CA): Center for Genetics and Society; [accessed 2 May 2020]. https://www.geneticsandsociety.org/biopolitical-times/illness-or-identity-disability-rights-scholar-comments-plan-use-crispr-prevent.
- Beitiks E. 5 Reasons Why We Need People with Disabilities in The CRISPR Debates. San Francisco (CA): Paul K. Longmore Institute on Disability; [accessed 2 May 2020]. https://longmoreinstitute.sfsu.edu/5-reasons-why-we-need-people-disabilities-crispr-debates.
- Ghaly M. 2019. Islamic Ethical Perspectives on Human Genome Editing. Issues in Science and Technology. 35 (3): 45-48.
The Role of Dendritic Spine Density in Neuropsychiatric and Learning Disorders
Photo originally by MethoxyRoxy on Wikimedia Commons. No changes. CC License BY-SA 2.5.
By Neha Madugala, Cognitive Science, ‘21
Author’s Note: Last quarter I took Neurobiology (NPB100) with Karen Zito, a professor at UC Davis. I was interested in her research on dendritic spines and its connection to my own research interests in the language and cognitive deficits present in populations such as individuals with schizophrenia. There seems to be a correlational link between the generation and quantity of dendritic spines and the presence of different neurological disorders. Given the dynamic nature of dendritic spines, current research is studying their exact role and the potential to manipulate these spines in order to influence learning and memory.
Introduction
Dendritic spines are small bulbous protrusions that line the sides of dendrites on a neuron [12]. They serve as a major site of synapses for excitatory neurons, which continue signal propagation in the brain. Relatively little is known about the exact purpose and role of dendritic spines, but there appears to be a correlation between the concentration of dendritic spines and the presence of different disorders, such as autism spectrum disorder (ASD), schizophrenia, and Alzheimer’s disease. Scientists hypothesize that dendritic spines are a key player in the pathogenesis of various neuropsychiatric disorders [8]. It should be noted that other morphological changes are also observed when individuals with these neuropsychiatric disorders are compared to neurotypical individuals. However, all of these disorders share the common thread of abnormal dendritic spine density.
The main disorders studied in relation to dendritic spine density are autism spectrum disorder (ASD), schizophrenia, and Alzheimer’s disease. Current studies suggest that these disorders cause the number of dendritic spines to stray from what is observed in a neurotypical individual. Dendritic spine density generally declines as an individual ages, but intellectual disabilities and neuropsychiatric disorders seem to alter this density at a more extreme rate. The accompanying graph demonstrates the general trend of dendritic spine density for various disorders; however, these trends may vary slightly across individuals with the same disorder.
Dendritic Spines
I. Role of Dendritic Spines
Dendritic spines are protrusions found on certain types of neurons throughout the brain, such as in the cerebellum and cerebral cortex. They were first identified by Ramon y Cajal, who classified them as “thorns or short spines” located nonuniformly along the dendrite [6].
The entire human cerebral cortex contains on the order of 10¹⁴ dendritic spines, and a single dendrite can carry several hundred [12]. There is an overall greater density of dendritic spines on peripheral dendrites versus proximal dendrites and the cell body [3]. Their main role is to assist in synapse formation on dendrites.
Dendritic spines fall into two categories: persistent and transient spines. Persistent spines are considered ‘memory’ spines, while transient spines are considered ‘learning’ spines. Transient spines exist for four days or less, while persistent spines last for eight days or longer [5].
The dense concentration of spines on dendrites is crucial to how dendrites function. At an excitatory synapse, neurotransmitter released onto excitatory receptors of the postsynaptic cell produces an excitatory postsynaptic potential (EPSP), a small depolarization that pushes the cell toward firing an action potential, the electrical impulse by which one neuron signals another. In order for a neuron to fire an action potential, positive charge must accumulate at its synapses until the membrane reaches a certain threshold of depolarization, a change in the charge difference across the neuron’s membrane that makes the inside more positive (Figure 2). A single EPSP may not produce enough depolarization to reach this threshold. The presence of multiple dendritic spines on the dendrite, however, allows multiple synapses to form and multiple EPSPs to be summated; with this summation, the cell can reach the action potential threshold (a minimal sketch of this arithmetic follows Figure 2). The greater the density of dendritic spines along the postsynaptic cell, the more synaptic connections can be formed, increasing the chance that an action potential occurs.
Figure 2. Firing of Action Potential (EPSP)
- Neurotransmitter is released by the presynaptic cell into the synaptic cleft.
- For an EPSP, an excitatory neurotransmitter will be released, which will bind to receptors on the postsynaptic cell.
- The binding of these excitatory neurotransmitters results in sodium channels opening, allowing sodium to flow down its electrochemical gradient – depolarizing the cell.
- The EPSPs will be summated at the axon hillock and trigger an action potential.
- This action potential will cause the firing cell to release a neurotransmitter at its axon terminal, further conveying the signal to other neurons.
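To make the threshold summation above concrete, here is a minimal sketch in Python. The resting potential (-70 mV) and threshold (-55 mV) are typical textbook values, and the fixed 2 mV contribution per EPSP is an illustrative assumption rather than a measured quantity.

```python
# Minimal sketch of EPSP summation at the axon hillock.
# All values are illustrative: -70 mV resting potential and -55 mV threshold
# are typical textbook figures; a uniform 2 mV EPSP size is an assumption.

RESTING_POTENTIAL_MV = -70.0
THRESHOLD_MV = -55.0
EPSP_MV = 2.0

def fires(n_epsps: int) -> bool:
    """Return True if n near-simultaneous EPSPs depolarize the cell past threshold."""
    membrane_potential = RESTING_POTENTIAL_MV + n_epsps * EPSP_MV
    return membrane_potential >= THRESHOLD_MV

print(fires(3))  # False: 3 EPSPs reach only -64 mV
print(fires(8))  # True:  8 EPSPs reach -54 mV, crossing the -55 mV threshold
```

In this toy model, a higher spine density simply means more synapses contributing EPSPs – a larger n_epsps – and therefore a better chance of crossing threshold.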
II. Creation
Dendrites are initially formed without spines. As development progresses, the plasma membrane of the dendrite forms protrusions called filopodia. These filopodia form synapses with axons and eventually transition into dendritic spines [6].
The reason behind the creation of dendritic spines is currently unknown, but there are a few hypotheses. The first suggests that dendritic spines increase the packing density of synapses, allowing more potential synapses to be formed. The second suggests that they help prevent excitotoxicity – overexcitation of the excitatory receptors (NMDA and AMPA receptors) present on the dendrites, which can damage the neuron or, in more severe cases, cause neuronal death. These receptors usually bind glutamate, a typically excitatory neurotransmitter released from the presynaptic cell. Since dendritic spines compartmentalize charge [3], they help prevent the dendrite from being over-excited beyond the threshold potential for an action potential. Lastly, a third hypothesis holds that the large variation in dendritic spine morphology reflects a role for these different shapes in modulating how postsynaptic potentials are processed by the dendrite, depending on the function of the signal.
The creation of these dendritic spines is rapid during early development, slowly tapering off as the individual gets older, when it is largely replaced by the pruning of synapses formed with dendritic spines. Pruning helps improve the signal-to-noise ratio of signals sent within neuronal circuits [3] – that is, the proportion of signals sent by neurons that are meaningfully received by postsynaptic cells, which determines the efficiency of signal transmission. Experimentation has shown that the presence of glutamate and excitatory receptors (such as NMDA and AMPA) can result in the formation of dendritic spines within seconds [3]. The introduction of NMDA and AMPA results in cleavage of intercellular adhesion molecule-5 (ICAM5) from hippocampal neurons; ICAM5 is a “neuronal adhesion molecule that regulates dendritic elongation and spine maturation” [11]. Furthermore, by combining fluorescent dyes with confocal or two-photon laser scanning microscopy, scientists have observed that spines can undergo minor changes within seconds and more drastic conformational changes – even disappearing – over minutes to hours [12].
III. Morphology
The spine’s morphology – a large bulbous head connected to the dendrite by a very thin neck – assists in its role as the postsynaptic compartment of a synapse. This shape allows one synapse at a dendritic spine to be activated and strengthened without influencing neighboring synapses [12].
Dendritic spine shape is extremely dynamic, allowing one spine to alter its morphology throughout its lifetime [5]. However, dendritic spine morphology seems to take on a predominant form determined by brain region. For instance, spines on neurons receiving presynaptic input from the thalamus take on the mushroom shape, whereas neurons of the lateral nucleus of the amygdala have thin spines on their dendrites [2]. The type of neuron and the brain region a spine originates from seem to be correlated with its observed morphology.
The spine contains a postsynaptic density, which consists of neurotransmitter receptors, ion channels, scaffolding proteins, and signaling molecules [12]. In addition, the spine has smooth endoplasmic reticulum, which forms stacks called the spine apparatus. It also has polyribosomes, hypothesized to be the site of local protein synthesis in spines, and an actin-based cytoskeleton for structure [12]. The actin-based cytoskeleton compensates for the lack of microtubules and intermediate filaments, which play a crucial role in the structure and transport of most animal cells. Furthermore, these spines are capable of compartmentalizing calcium, the ion whose influx at neural synapses triggers the presynaptic cell to release its neurotransmitter into the synaptic cleft [12]. Calcium plays a crucial role in second messenger cascades, influencing neural plasticity [6]. It also plays a role in actin polymerization, which underlies the motile nature of spine morphology [6].
Dendritic spines come in many shapes. The common types are ‘stubby’ (short and thick with no neck), ‘thin’ (small head and thin neck), ‘mushroom’ (large head with a constricted neck), and ‘branched’ (two heads branching from the same neck) [12].
IV. Learning and Memory
Dendritic spines play a crucial role in memory and learning through long-term potentiation (LTP), which is thought to be the cellular basis of learning and memory. LTP is thought to induce spine formation, consistent with the common observation that learning is associated with the formation of dendritic spines. Furthermore, LTP is thought to be capable of altering both the immature and the mature hippocampus, a region commonly associated with memory [2]. Long-term depression (LTD), by contrast, works in the opposite direction – decreasing dendritic spine density and size [2].
The causal relationship between dendritic spines and learning remains poorly understood. There is a general trend suggesting that the creation of these spines is associated with learning; however, it is unclear whether learning results in the formation of these spines or the formation of these spines results in learning. The general idea behind this hypothesis is that dendritic spines aid in the formation of synapses, allowing the brain to form more connections. As a result, a decline in dendritic spines in neuropsychiatric disorders, such as schizophrenia, could inhibit an individual’s ability to learn – reflected in the various cognitive and linguistic deficits observed in individuals with schizophrenia.
Memory is associated with the strengthening and weakening of connections due to LTP and LTD, respectively. The alteration of these spines through LTP and LTD is called activity-dependent plasticity [6]. The main morphological shapes associated with memory are the mushroom spine (a large head with a constricted neck) and the stubby spine (a short and thick spine with no neck) [6]. Both of these spine types are relatively large, resulting in more stable and enduring connections; these larger spines are a result of LTP. By contrast, transient spines (lasting four days or less) are usually smaller and more immature in morphology and function, resulting in more temporary and less stable connections.
LTP and LTD thus play a crucial role in modifying dendritic spine morphology, and neuropsychiatric disorders can alter these mechanisms, resulting in abnormal density and size of these spines.
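As a rough illustration of the activity-dependent plasticity described above, the sketch below treats a single connection as a numeric ‘weight’ that LTP strengthens and LTD weakens. The update rates are arbitrary assumptions for illustration, not biological constants.

```python
# Toy model of activity-dependent plasticity: LTP strengthens and LTD weakens
# a synaptic "weight". The rates 0.10 and 0.05 are arbitrary assumptions.

def update_weight(weight: float, ltp_events: int, ltd_events: int,
                  ltp_rate: float = 0.10, ltd_rate: float = 0.05) -> float:
    """Return the connection weight after potentiation and depression events."""
    return weight + ltp_rate * ltp_events - ltd_rate * ltd_events

w = 1.0
w = update_weight(w, ltp_events=5, ltd_events=0)  # repeated stimulation -> 1.5
w = update_weight(w, ltp_events=0, ltd_events=4)  # reduced activity     -> 1.3
print(round(w, 2))
```

In this caricature, a persistently strengthened weight stands in for a stable mushroom or stubby spine, while a weight that decays back down mirrors a transient spine that is eventually pruned.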
Schizophrenia
I. What is Schizophrenia?
Schizophrenia is a mental disorder that results in disordered thinking and behavior, hallucinations, and delusions [9]. The exact mechanics of schizophrenia are still being studied as researchers try to determine the underlying biological causes of the disorder and ways to help affected individuals. Current treatment focuses on reducing, and in some cases relieving, symptoms, but more research and understanding are required to fully treat this mental disorder.
II. Causation
The exact source of schizophrenia seems to lie somewhere between the presence of certain genes and environmental effects. There seems to be a correlation between traumatic or stressful life events during adolescence and an increased susceptibility to developing schizophrenia [1]. While research is still underway, certain studies point to cannabis having a role in increasing susceptibility to schizophrenia or worsening symptoms in individuals who already have the disorder [1]. There also seems to be a genetic component, given the increased likelihood of developing schizophrenia when a family member has the disorder; this factor seems to result from a combination of genes, though no specific genes have yet been identified. Finally, there seems to be a chemical component, given differences in neurotransmitter composition and density between neurotypical individuals and individuals with schizophrenia. Specifically, researchers have observed elevated amounts of dopamine in individuals with schizophrenia [1].
III. Relationship between Dendritic Spines and Schizophrenia
A common thread among most schizophrenia patients is impaired dendritic morphology of pyramidal neurons (a prominent cell type of the cerebral cortex), occurring in various regions of the cortex [7]. In postmortem brain tissue studies, there appears to be a reduced density of dendritic spines in the brains of individuals with schizophrenia. These findings are consistent across the various regions of the brain that have been studied, such as the frontal and temporal neocortex, the primary visual cortex, and the subiculum within the hippocampal formation [7]. Across seven studies observing this finding, the median reported decrease in spine density was 23%, with individual studies reporting declines ranging from 6.5% to 66% [7].
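To be clear about what these summary statistics describe, the snippet below recovers a median and range from a set of per-study values. Only the median (23%) and the range (6.5–66%) are reported in [7], so the seven numbers here are hypothetical, chosen solely to be consistent with those reported figures.

```python
# Hypothetical per-study declines in spine density (%). Only the median (23%)
# and the range (6.5-66%) are reported in [7]; the list itself is illustrative.
import statistics

declines = [6.5, 14.0, 20.0, 23.0, 31.0, 45.0, 66.0]

print(statistics.median(declines))   # 23.0     -> the reported median decline
print(min(declines), max(declines))  # 6.5 66.0 -> the reported range
```

The wide range relative to the median reflects how much the measured decline varies from study to study, even though every study found some reduction.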
It should be noted that studies have examined whether the decline in spine density could be due to the use of antipsychotic drugs; however, animal and human studies found no significant effect of these drugs on dendritic spine density.
This decline in dendritic spine density is hypothesized to result either from a failure of the brain to produce sufficient dendritic spines early in development or from a more rapid loss of these spines during adolescence, when the onset of schizophrenia is typically observed [7]. The source of this decline is unclear but seems attributable to deficits in the pruning, maintenance, or underlying formation of these spines [7].
The evidence does not clearly favor one of these possibilities over the other. For instance, Thompson et al. conducted an in vivo study suggesting that a decline in spine density drives the progressive loss of gray matter typically observed in schizophrenic individuals. Using MRI scans of twelve schizophrenic individuals and twelve neurotypical individuals, the study found a progressive decline in gray matter, starting in the parietal lobe and expanding out to motor, temporal, and prefrontal areas [10]. The authors attribute this decline primarily to a loss of dendritic spines as the disorder progresses, which coincides with the previously mentioned hypothesis of spine loss during adolescence.
It is also possible that both of these factors occur in combination. Most studies have only been able to examine postmortem brain tissue, leaving it unclear whether spines decline over time or are simply never produced in the first place. The lack of in vivo studies makes it difficult to establish a concrete trend in the data.
Conclusion
While research is still ongoing, current evidence suggests that dendritic spines are crucial to learning and memory. Their role in these functions is reflected by their reduced density in various neuropsychiatric disorders – in schizophrenia, in certain learning deficits present in some individuals with ASD, and in the memory deficits of Alzheimer’s disease. These deficits seem to arise during the early creation of neural networks in the brain at synapses. Further research into the development of these spines and the creation of their different morphological forms could be crucial in determining how to treat, or potentially cure, the deficits present in neuropsychiatric and learning disorders.
References
- “Causes – Schizophrenia.” NHS Choices, NHS, www.nhs.uk/conditions/schizophrenia/causes/.
- Bourne, Jennifer N, and Kristen M Harris. “Balancing Structure and Function at Hippocampal Dendritic Spines.” Annual Review of Neuroscience, U.S. National Library of Medicine, 2008, www.ncbi.nlm.nih.gov/pmc/articles/PMC2561948/.
- “Dendritic Spines: Spectrum: Autism Research News.” Spectrum, www.spectrumnews.org/wiki/dendritic-spines/.
- Hofer, Sonja B., and Tobias Bonhoeffer. “Dendritic Spines: The Stuff That Memories Are Made Of?” Current Biology, vol. 20, no. 4, 2010, doi:10.1016/j.cub.2009.12.040.
- Holtmaat, Anthony J.G.D., et al. “Transient and Persistent Dendritic Spines in the Neocortex In Vivo.” Neuron, Cell Press, 19 Jan. 2005, www.sciencedirect.com/science/article/pii/S0896627305000048.
- McCann, Ruth F, and David A Ross. “A Fragile Balance: Dendritic Spines, Learning, and Memory.” Biological Psychiatry, U.S. National Library of Medicine, 15 July 2017, www.ncbi.nlm.nih.gov/pmc/articles/PMC5712843/.
- Moyer, Caitlin E, et al. “Dendritic Spine Alterations in Schizophrenia.” Neuroscience Letters, U.S. National Library of Medicine, 5 Aug. 2015, www.ncbi.nlm.nih.gov/pmc/articles/PMC4454616/.
- Penzes, Peter, et al. “Dendritic Spine Pathology in Neuropsychiatric Disorders.” Nature Neuroscience, U.S. National Library of Medicine, Mar. 2011, www.ncbi.nlm.nih.gov/pmc/articles/PMC3530413/.
- “Schizophrenia.” Mayo Clinic, Mayo Foundation for Medical Education and Research, 7 Jan. 2020, www.mayoclinic.org/diseases-conditions/schizophrenia/symptoms-causes/syc-20354443.
- “Schizophrenia and Dendritic Spines.” Ness Labs, 20 June 2019, nesslabs.com/schizophrenia-dendritic-spines.
- “Synaptic Cleft: Anatomy, Structure, Diseases & Functions.” The Human Memory, 17 Oct. 2019, human-memory.net/synaptic-cleft/.
- Tian, Li, et al. “Activation of NMDA Receptors Promotes Dendritic Spine Development through MMP-Mediated ICAM-5 Cleavage.” The Journal of Cell Biology, Rockefeller University Press, 13 Aug. 2007, www.ncbi.nlm.nih.gov/pmc/articles/PMC2064474/.
- Zito, Karen, and Venkatesh N. Murthy. “Dendritic Spines.” Current Biology, vol. 12, no. 1, 2002, doi:10.1016/s0960-9822(01)00636-4.
Einstein’s Fifth Symphony
By Jessie Lau, Biochemistry and Molecular Biology ‘20
Author’s Note: Growing up, playing the piano was a major part of my life— weekdays were filled with hour-long practices while Saturdays were for lessons. My schedule was filled with preparations for board exams and recitals, and in the absence of the black and white keys, my fingers were always tapping away at any surface I could find. My parents always told me learning the piano was good for my education and will put me ahead in school because it will help with my math and critical thinking in the long run. However, I was never able to understand the connection between the ease of reading music and my ability to calculate complex integrals. In this paper, I will extrapolate on the benefits of learning an instrument in cognitive development.
Introduction
What do Albert Einstein, Werner Heisenberg, Max Planck, and Barbara McClintock all have in common? Other than their Nobel Prize-winning research in their respective fields, all of these scientists shared a love of playing a musical instrument. At an early age, Einstein followed his mother in playing the violin; Heisenberg learned to read music to play the piano at the young age of four; Planck became gifted at the organ and piano; McClintock played the tenor banjo in a jazz band during her time at Cornell University [1]. While these researchers honed their musical talent, they were engaging both their central and peripheral nervous systems. Playing an instrument requires the coordination of various parts of the brain working together. The motor system gauges the meticulous movements that produce sound, which is then picked up by the auditory circuitry. Simultaneously, sensory information picked up by the fingers and hands is delivered to the brain, and individuals reading music use visual nerves to send information to the brain as well. This information is processed and interpreted to generate a response carried out by the extremities. All the while, the sound of the music elicits an emotional response from the player.
Feedforward and feedback pathways of the brain are two auditory-motor interactions elicited while playing an instrument. Feedforward interactions are predictive processes that can influence motor responses – for example, tapping to the rhythm of a beat in anticipation of the upcoming flux and accents in the piece. Feedback interactions, by contrast, are particularly important for stringed instruments such as the violin, where pitch changes continuously and requires constant management [12]. As shown in Figure 1, the musician must auditorily perceive each note and respond with suitably timed motor changes. All of these neurophysiological components raise the question of how musical training can shape brain development. Longitudinal studies find that musical training can have expansive benefits for the development of linguistics, executive function, general IQ, and academic achievement [2].
Linguistic Skills
Music shares the same dorsal auditory pathway and processing center in the brain with all other sounds. This passageway is anatomically linked by the arcuate fasciculus, suggesting that instrumental training will translate into language-related skills. This pathway is central to an array of such skills, including language development, second-language acquisition, and verbal memory [2]. According to Vaquero et al., “Playing an instrument or speaking multiple languages involve mapping sounds to motor commands, recruiting auditory and motor regions such as the superior temporal gyrus, inferior parietal, inferior frontal and premotor areas, that are organized in the auditory dorsal stream” [10].
Researchers studying the effects of acoustic stimuli mimicking stop-consonant speech on language development find that children learning instruments during the critical developmental period (0-6 years old) build lasting structural and organizational modifications in their auditory system that later affect language skills. Stop consonants include the voiceless sounds /p/, /t/, and /k/, as well as the voiced sounds /b/, /d/, and /g/. Dr. Strait and her colleagues describe their observations: “Given relationships between subcortical speech-sound distinctions and critical language and reading skills, music training may offer an efficient means of improving auditory processing in young children” [11].
Similarly, Dr. Patel suggests that speech and music share overlapping brain networks because playing an instrument demands accuracy. Refining this finesse requires attentional training combined with self-motivation and determination. Repeated stimulation of these brain networks garners “emotional reinforcement potential,” which is key to the “… good performance of musicians in speech processing” [3].
Beyond stimulating auditory neurons, instrumental training has been shown to improve verbal memory. For example, in a comparative analysis, researchers found that children who had undergone musical training demonstrated verbal memory advantages over their peers without training [4]. Following up a year after the initial study, they found that continued practice led to substantial advancement in verbal memory, while those who discontinued failed to show any improvement. This finding is supported by Jakobson et al., who propose that “… enhanced verbal memory performance in musicians is a byproduct of the effect of music instruction on the development of auditory temporal-order processing abilities” [5].
In the case of acquiring a second language [abbreviated L2], a study conducted on 50 Japanese adults learning English finds, “… the ability to analyze musical sound structure would also likely facilitate the analysis of a novel phonological structure of an L2” [6]. These researchers further elaborate on the potential of improving English syntax with musical exercises concentrating on syntactic processes, such as “… hierarchical relationships between harmonic or melodic musical elements” [6]. Multiple studies have also identified music training to invoke specific structures in the brain also employed during language processing, including Heschl’s gyrus and Broca’s and Wernicke’s areas [2].
While music and language elements are stored in different regions of the brain, their common auditory pathway allows instrumental training to strengthen linguistic development in multiple areas.
Executive Function
Executive function is the application of the prefrontal cortex to carry out tasks requiring conscious effort to attain a goal, particularly in novel scenarios [7]. This umbrella term includes cognitive control of attention and inhibition, working memory, and the ability to switch between tasks. Psychologists Dr. Hannon and Dr. Trainor find formal musical education to be a direct implementation of “… domain-specific effects on the neural encoding of musical structure, enhancing musical performance, music reading and explicit knowledge of the musical structure” [8]. The combination of domain-general development and executive functioning can influence linguistic as well as mathematical development. While learning an instrument, musicians must actively read musical notation – itself a unique language – and translate what they see into precise mechanical maneuvers, demanding careful focus. All the while, they must attend to identifying and remedying errors in harmony, tone, beat, and fingering. Furthermore, becoming well-trained requires scheduled rehearsals that build a foundational framework for operating the instrument while learning new technical elements and building robust spatial awareness. This explicit exercise of executive function during scheduled practice sessions is thus essential in sculpting this region of the prefrontal cortex.
General IQ and Academic Performance
While merely listening to music has been found to confer some academic advantage, the active, deliberate practice of playing music grants musically apt individuals scholastic benefits absent in their counterparts. In a study conducted by Schellenberg, 144 six-year-olds were assigned to one of four groups: the music groups received keyboard or voice lessons, while the control groups received drama classes or no lessons at all [9]. After 36 weeks, assessments using the Wechsler Intelligence Scale for Children–Third Edition (WISC-III), composed of varying subtests to evaluate intelligence, showed a significant increase in IQ in all four groups; this general rise can be attributed in part to the start of grade school. Despite the overall increase, individuals who received keyboard or voice lessons showed a greater jump in IQ. Schellenberg reasons that school attendance increases one’s IQ and that the smaller the learning setting, the more academic success a student can achieve; music lessons, often taught individually or in small groups, mirror this structure and thus yield the additional IQ boost.
Possible Confounding Factors
While these studies find a positive correlation between learning a musical instrument and various aspects of brain maturation, the researchers note the importance of considering confounding factors that often cannot be controlled for. These include socioeconomic status, prior IQ, education, and the other activities in which participants are involved. While many of these researchers worked to gather subjects with similar backgrounds in these domains, all of these external elements can play an essential role in one’s developmental period. Moreover, formally learning an instrument is often a financially hefty extracurricular activity, so more affluent families with higher educational backgrounds can more readily afford these programs for their children.
Furthermore, each study implemented different practice times, training durations, and instruments for participants to learn. Findings gathered under one set of parameters may not be reproducible under different ones.
Beyond these external factors, one must also consider each participant’s willingness to learn the instrument. If an individual lacks the desire or motivation to become musically educated, spending the required time playing the instrument does not necessarily translate into developmental gains, as the relevant regions of the brain are not actively engaged.
Conclusion
Numerous studies demonstrate the varied benefits music training can provide for cognitive development. Extensive research has shown that the process of physically, mentally, and emotionally engaging oneself in learning an instrument can confer diverse advantages on the maturing brain. The discipline and rigor needed to gain expertise in playing a musical instrument translate, often unconsciously, to the experimental setting, and the unique melodic character of each piece sparks the creativity needed to propose imaginative visions. While instrumental education does not fully account for Einstein, Heisenberg, Planck, and McClintock’s scientific success, this extracurricular activity has been shown to provide a substantial boost in critical thinking.
References
- “The Symphony of Science.” The Nobel Prize, March 2019. https://www.symphonyofscience.com/vids.
- Miendlarzewska, Ewa A., and Wiebke J. Trost. “How Musical Training Affects Cognitive Development: Rhythm, Reward and Other Modulating Variables.” Frontiers in Neuroscience 7 (January 20, 2014). https://doi.org/10.3389/fnins.2013.00279.
- Patel, Aniruddh D. “Why Would Musical Training Benefit the Neural Encoding of Speech? The OPERA Hypothesis.” Frontiers in Psychology 2 (June 29, 2011): 1–14. https://doi.org/10.3389/fpsyg.2011.00142.
- Ho, Yim-Chi, Mei-Chun Cheung, and Agnes S. Chan. “Music Training Improves Verbal but Not Visual Memory: Cross-Sectional and Longitudinal Explorations in Children.” Neuropsychology 17, no. 3 (August 2003): 439–50. https://doi.org/10.1037/0894-4105.17.3.439.
- Jakobson, Lorna S., Lola L. Cuddy, and Andrea R. Kilgour. “Time Tagging: A Key to Musicians’ Superior Memory.” Music Perception 20, no. 3 (2003): 307–13. https://doi.org/10.1525/mp.2003.20.3.307.
- Slevc, L. Robert, and Akira Miyake. “Individual Differences in Second-Language Proficiency.” Psychological Science 17, no. 8 (2006): 675–81. https://doi.org/10.1111/j.1467-9280.2006.01765.x.
- Banich, Marie T. “Executive Function.” Current Directions in Psychological Science 18, no. 2 (April 1, 2009): 89–94. https://doi.org/10.1111/j.1467-8721.2009.01615.x.
- Hannon, Erin E., and Laurel J. Trainor. “Music Acquisition: Effects of Enculturation and Formal Training on Development.” Trends in Cognitive Sciences 11, no. 11 (November 2007): 466–72. https://doi.org/10.1016/j.tics.2007.08.008.
- Schellenberg, E. Glenn. “Music Lessons Enhance IQ.” Psychological Science 15, no. 8 (August 1, 2004): 511–14. https://doi.org/10.1111/j.0956-7976.2004.00711.x.
- Vaquero, Lucía, Paul-Noel Rousseau, Diana Vozian, Denise Klein, and Virginia Penhune. “What You Learn & When You Learn It: Impact of Early Bilingual & Music Experience on the Structural Characteristics of Auditory-Motor Pathways.” NeuroImage 213 (2020): 116689. https://doi.org/10.1016/j.neuroimage.2020.116689.
- Strait, D. L., S. O’Connell, A. Parbery-Clark, and N. Kraus. “Musicians’ Enhanced Neural Differentiation of Speech Sounds Arises Early in Life: Developmental Evidence from Ages 3 to 30.” Cerebral Cortex 24, no. 9 (2013): 2512–21. https://doi.org/10.1093/cercor/bht103.
- Zatorre, Robert J., Joyce L. Chen, and Virginia B. Penhune. “When the Brain Plays Music: Auditory–Motor Interactions in Music Perception and Production.” Nature Reviews Neuroscience 8, no. 7 (July 2007): 547–58. https://doi.org/10.1038/nrn2152.