Potential Methods of Life Detection on Ocean Worlds
By Ana Menchaca, Biochemistry and Molecular Biology ‘20
Author’s Note: As a biochemistry major who is interested in pursuing astrobiology research, I initially wrote this literature review for an assignment in my Writing in Biology course. Methods of life detection and what we know about life is a field in which we still have much to discover and explore, given Earth as our only example, and I hope to be involved in this exploration myself in the future.
Abstract
Ocean worlds, such as Enceladus, an icy moon of Saturn, provide intriguing environments and the potential for life as we continue to explore the Solar System. Organic compounds discovered in plumes erupting from the moon during flybys point towards the presence of amino acids and other precursors of life. The data collected from these flybys, in turn, has been used to calculate the theoretical amounts of amino acids present in the oceans of Enceladus. While this data is intriguing, it relies on a limited definition of life, based on organisms and macromolecules that have only been observed on Earth. Other methods, including detection with nucleic acids or nanopores, have been proposed. Nucleic acid-based methods identify a broad spectrum of compounds through binding, while nanopores rely on measurements of ionic flow. These alternative methods allow for a broader spectrum of compound detection than terran-based methods, creating the potential to detect unfamiliar kinds of life. Research into more holistic detection should continue.
Keywords: astrobiology, life detection, planetary exploration, biosignatures
Introduction
The search for life elsewhere in the Solar System is becoming increasingly relevant, and more importantly, feasible. Icy moons, such as Europa, Titan, and Enceladus, have been identified as holding the greatest potential for extraterrestrial life within the Solar System [1]. Europa and Enceladus, with seas below icy crusts, have geysers with unidentified fluctuations, along with evidence of tidal warming and geologic activity [2]. The Cassini spacecraft identified these geysers on Enceladus during a flyby in 2006, spouting from four specific fractures on the surface of the moon [3]. Analysis of the vapors produced shows that they mainly consist of water, along with CO2, N2, CO, CH4, salts, other organic compounds, and silica particulates [3, 4]. This points towards hydrothermal activity, the movement of heated water, which has the potential to provide the energy necessary for life [4]. Additionally, the discovery of volatile aliphatic hydrocarbons in these plumes potentially indicates some degree of organic evolution within the seas of Enceladus [4].
However, there is no consensus yet on how to detect and identify life [1, 2]. Some scientists propose searching for life based on the shared ancestry hypothesis, which holds that all life shares the same genetic ancestry [2]. Others argue that extraterrestrial life could differ from terran life in ways we may be unable to recognize or detect with our current methods of biochemical detection [5]. Experimentally, the potential for nucleic acids built on different backbones has already been demonstrated [5]. Here, we examine the range of proposed methods for identifying extraterrestrial life.
Proposed theories and methods based on current knowledge of life
Collection of amino acids
Data collected from Enceladus' plumes reveal organic compounds that provide potential evidence of amino acid synthesis taking place in the oceans of the moon. Steel et al. used the thermal flux at the moon's South Polar Terrain (SPT) to predict the hydrogen produced by hydrothermal activity. The predicted rates of production ranged between 0.63 and 33.8 mol/s of H2, and from there, amino acid production rates were estimated to be between 8.4 and 449.4 mmol/s [4]. Annual biomass production was also modelled in these calculations and estimated at 4 × 10⁴ to 2 × 10⁶ kg/year, compared to 10¹⁴ kg/year on Earth. These estimates, however, assume an abiotic, steady-state ocean; the actual production rates could be different if there is life present in Enceladus' ocean [4].
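To give a sense of scale, the short calculation below converts the reported production-rate range into an approximate annual mass of amino acids. This is a minimal back-of-the-envelope sketch, not part of the Steel et al. model: the assumed mean amino acid molar mass of roughly 110 g/mol and the treatment of the rates as constant over a full year are illustrative assumptions only.

```python
# Rough conversion of the reported amino acid production rates into annual mass.
# Assumptions (not from Steel et al.): a mean amino acid molar mass of ~110 g/mol
# and production rates held constant over a full year.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
MEAN_AMINO_ACID_MOLAR_MASS = 110.0  # g/mol, assumed average

def annual_mass_kg(rate_mmol_per_s: float) -> float:
    """Convert a production rate in mmol/s to an approximate annual mass in kg."""
    grams_per_second = (rate_mmol_per_s / 1000.0) * MEAN_AMINO_ACID_MOLAR_MASS
    return grams_per_second * SECONDS_PER_YEAR / 1000.0

for rate in (8.4, 449.4):  # reported lower and upper bounds, mmol/s
    print(f"{rate:6.1f} mmol/s  ->  ~{annual_mass_kg(rate):.1e} kg/year")
```

Under these assumptions the range works out to roughly 3 × 10⁴ to 2 × 10⁶ kg of amino acids per year, the same order of magnitude as the modelled biomass figures quoted above.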
While this limits our predictions of Enceladus' true environment, it still provides a basis that can be extrapolated for the design of instruments to be sent there. One such proposed instrument is the Enceladus Organic Analyzer, which is designed to analyze amino acids through variations in chain length [3]. Properly collecting and analyzing the amino acids proposed to be in Enceladus' oceans carries several requirements. The sample must be collected from the subsurface ocean with minimal degradation, isomerization, racemization, and contamination of biological molecules and amino acids [3]. A collection chamber made of aluminum has been modeled, designed to reduce the thermal heating caused by sample collection in order to best preserve the samples. If the moon contains bacteria as postulated, this design will lyse and kill collected cells through heat or shock while releasing their more stable chemical components for analysis [3]. This approach depends heavily on current knowledge as a starting place, focusing narrowly on amino acid and cell identification. Another method built around cell identification is digital holographic microscopy.
Digital holographic microscopy
The development and improvement of microscopy, while beneficial, depend heavily on the assumption that life in the same form as terran cells will be found. Investigators propose digital holographic microscopy (DHM) as a more efficient alternative to traditional light microscopy [1]. This technology produces a 100-fold improvement in the depth of field and is able to monitor both the intensity and phase of images. However, even with the increase in resolution, differentiating cells from cell-shaped structures is difficult, even before taking into account potential differences in extraterrestrial life. Refraction-based imaging, an emerging approach, was able to experimentally distinguish crystalline structures from cells in the study's Arctic samples. While the technology can be miniaturized and can discriminate between cells and minerals, it depends heavily on capturing a sufficient number of cells from the plumes. These experimental data were obtained using dye-free techniques, which would still function for organisms without DNA or RNA, combined with refraction-based differentiation of structures [1]. DHM is therefore useful both for detecting cells based on collected data and for the potential discovery of organisms without nucleic acids as we know them.
Expanding outside the current knowledge of life
Detection using nucleic acids
Other experiments and proposals, while not explicitly targeting life outside current conceptions, propose a more holistic collection of data. This carries the potential of identifying life outside our current scope, as opposed to focusing directly on known amino acids and cells. Using a broader concept of nucleic acids as a means of detection and identification is one such method.
Oligonucleotides, by forming secondary and tertiary structures, have specificity and affinity for a wide variety of molecules, both organic and inorganic [2]. Even at a length of only 15 base pairs and within complex mixtures, these molecules can bind to what is being analyzed, or the analytes. Systematic evolution of ligands by exponential enrichment (SELEX) is a process that can identify oligonucleotides that bind very specifically to analytes. This method, however, proposes the use of the low-affinity, low-specificity nucleic acids that are typically discarded in that process. Unlike antibodies, this method requires no prior knowledge of the surface attributes or three-dimensional structure of the molecule being bound. By accumulating a wide range of binding sequences and applying statistical analysis, a vast number of compounds can be detected and environmental variations identified. Additionally, this method posits that the optimal means of capturing sequences is the proximity ligation assay (PLA), a technique already in use in other fields. PLA purifies the binding species based on ligation and amplification, producing a lower background than sieving, which separates based on size. It is also capable of capturing a vast range of sequences and structures, including inorganic, organic, or polymeric molecules [2], and is thus more capable of providing holistic results.
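As a rough illustration of this statistical fingerprinting idea, the sketch below represents each sample as a vector of binding intensities across a hypothetical panel of low-affinity oligonucleotides and compares samples by the similarity of those vectors. The panel size, the simulated signals, and the use of cosine similarity are all assumptions made for demonstration; this is not the analysis pipeline of Johnson et al.

```python
# Minimal sketch of the fingerprinting idea: each sample is summarized as a vector
# of binding intensities across a panel of low-affinity oligonucleotides, and
# samples are compared by the similarity of those vectors. The panel, samples,
# and cosine-similarity choice are illustrative assumptions, not the published
# analysis pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_probes = 100  # size of the hypothetical oligonucleotide panel

blank = rng.normal(1.0, 0.1, n_probes)               # background binding only
sample_a = blank + rng.normal(0.0, 0.05, n_probes)   # near-blank sample
sample_b = blank + rng.gamma(2.0, 0.5, n_probes)     # sample with extra analytes

def cosine_similarity(x: np.ndarray, y: np.ndarray) -> float:
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

print("blank vs. near-blank:", round(cosine_similarity(blank, sample_a), 3))
print("blank vs. analyte-rich:", round(cosine_similarity(blank, sample_b), 3))
```

The point of such a comparison is that a sample whose fingerprint departs strongly from a blank flags the presence of binding chemistry worth investigating, without requiring prior knowledge of which specific compounds are bound.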
Nanopore-based sensing
Nanopore-based sensing, presented as an alternative to current methods, detects and analyzes genetic information carriers in watery systems without making assumptions about their chemical composition [5]. This system relies on the physical constraints placed on such compounds in watery systems: the repeating charge of the backbone keeps polymer strands from folding and favors solubility in water. A nanopore is a hole a few nanometers in diameter in an insulating membrane separating two chambers that contain an electrolyte solution. Because of this diameter, only single-stranded DNA can pass through the nanopore, allowing for slow movement and characteristic signals that produce data clearly distinguishable from other molecular data. The method detects and analyzes molecules by measuring the ionic flow across the membrane. While biological nanopores are able to detect and resolve individual terran bases, nonbiological, solid-state nanopores provide the same function while avoiding the limitations that come with biological nanopores being built from terran molecules. Graphene, with its crystalline form, can have its membrane adjusted to accommodate only one nucleotide at a time or can be sculpted to produce nanopores of varying sizes. This could allow for the detection of other polymers with chemical and steric properties that differ from currently known polymers. The approach therefore has the potential to analyze a broad range of molecules with no assumptions about their chemistry beyond a charged, linear structure. Few identified nonbiological polymers are structured this way, so any signal picked up by nanopores would be significant [5].
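The core measurement, a transient drop in ionic current while a molecule occupies the pore, can be illustrated with a simple threshold-based event detector run on a synthetic current trace. Everything in the sketch below, the baseline current, noise level, injected blockades, and threshold, is invented for demonstration; real nanopore signal processing is considerably more sophisticated.

```python
# Illustrative threshold-based detection of current "blockade" events in a
# synthetic ionic-current trace. The trace, baseline, and threshold are made up
# for demonstration; real nanopore signal processing is far more involved.
import numpy as np

rng = np.random.default_rng(1)
baseline_pA = 100.0
trace = baseline_pA + rng.normal(0.0, 1.0, 5000)  # open-pore current with noise

# Inject three artificial blockades (current drops) of varying depth and length.
for start, length, depth in [(500, 40, 30.0), (2000, 60, 45.0), (4200, 25, 20.0)]:
    trace[start:start + length] -= depth

threshold = baseline_pA - 10.0  # flag samples well below the open-pore level
below = trace < threshold

# Group consecutive below-threshold samples into discrete events.
events = []
start = None
for i, flag in enumerate(below):
    if flag and start is None:
        start = i
    elif not flag and start is not None:
        events.append((start, i - start, baseline_pA - trace[start:i].mean()))
        start = None

for begin, duration, mean_depth in events:
    print(f"event at sample {begin}: duration {duration}, mean depth {mean_depth:.1f} pA")
```

Each detected event's duration and depth is the kind of characteristic signal described above; in a real instrument, such features would be used to classify the translocating molecule.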
There are, however, limitations to this approach. Nucleic acids pass through pores at high translocation speeds, because the applied voltage that drives the charged polymer through the pore also sweeps it through quickly, making it necessary to find ways of slowing them down for accuracy [5]. Potential methods of control include physical factors such as temperature, salinity, and viscosity. The conditions of collection on other planets also pose the problem of extreme dilution of the target molecules, which depends on a large number of variables [5].
Conclusion
Radiation and stability are major concerns in moving forward with any sort of data collection from extraterrestrial worlds. Instruments and samples are potentially exposed to the detrimental effects of extreme vacuum and solar radiation [5]. These problems should be addressed in conjunction with the technology actually being used for analysis to produce the most useful results. Some of these issues have been addressed to an extent; for example, microfluidic systems can be used for collection because their internal surface tension makes them unaffected by the vacuum of space [3]. However, these problems need to be explored further in all cases to ensure that each method can function in uncontrolled or non-terran environments.
The presented data indicate the potential existence of amino acids in these environments. Even though prediction and detection of these amino acids seem a logical step forward, the development of broader technologies for life detection should also be pursued. Current knowledge is limited to the qualities of terran life; while that is a well-supported starting point, methods that leave open the possibility of deviation from it may allow for the detection of otherwise overlooked forms of life. Moving forward, it seems only logical to combine methods that can detect both the known and the unknown, allowing scientists to gather the widest possible array of data in future missions, especially on promising worlds like Enceladus.
References
- Bedrossian M, Lindensmith C, Nadeau JL. 2016. Digital Holographic Microscopy, a Method for Detection of Microorganisms in Plume Samples from Enceladus and Other Icy Worlds. Astrobiology 17(9):913–925.
- Johnson SS, Anslyn EV, Graham HV, Mahaffy PR, Ellington AD. 2018. Fingerprinting Non-Terran Biosignatures. Astrobiology 18(7):915–922.
- Mathies RA, Razu ME, Kim J, Stockton AM, Turin P, Butterworth A. 2016. Feasibility of Detecting Bioorganic Compounds in Enceladus Plumes with the Enceladus Organic Analyzer. Astrobiology 17(9):902–912.
- Steel EL, Davila A, McKay CP. 2017. Abiotic and Biotic Formation of Amino Acids in the Enceladus Ocean. Astrobiology 17(9):862–875.
- Rezzonico F. 2014. Nanopore-Based Instruments as Biosensors for Future Planetary Missions. Astrobiology 14(4):344–351.
The Scientific Cost of Progression: CAR-T Cell Therapy
By Picasso Vasquez, Genetics and Genomics ‘20
Author’s Note: One of the main goals for my upper division UWP class was to write about a recent scientific discovery. I decided to write about CAR-T cell therapy because this summer I interned at a pharmaceutical company and worked on a project that involved using machine learning to optimize the CAR-T manufacturing process. I think readers would benefit from this article because it talks about a recent development in cancer therapy.
“There’s no precedent for this in cancer medicine.” Dr. Carl June is the director of the Center for Cellular Immunotherapies and the director of the Parker Institute for Cancer Immunotherapy at the University of Pennsylvania. June and his colleagues were the first to use CAR-T, which has since revolutionized personal cancer immunotherapy [1]. “They were like modern-day Lazarus cases,” said Dr. June, referencing the resurrection of Saint Lazarus in the Gospel of John and how it parallels the first two patients to receive CAR-T. CAR-T, or chimeric antigen receptor T-cell, is a novel cancer immunotherapy that uses a person’s own immune system to fight off cancerous cells existing within their body [1].
Last summer, I had the opportunity to venture across the country from Davis, California, to Springhouse, Pennsylvania, where I worked for 12 weeks as a computational biologist. One of the projects I worked on used machine learning models to improve the manufacturing process of CAR-T, with the goal of reducing the cost of the therapy. The manufacturing process begins when T-cells are collected from the hospitalized patient through a process called leukapheresis. The collected T-cells are then frozen and shipped to a manufacturing facility, such as the one I worked at this summer, where they are grown in large bioreactors. On day three, the T-cells are genetically engineered to be selective towards the patient's cancer by the addition of the chimeric antigen receptor; this process turns the T-cells into CAR-T cells [2]. For the next seven days, the bioengineered T-cells continue to grow and multiply in the bioreactor. On day 10, the T-cells are frozen and shipped back to the hospital, where they are injected back into the patient. Over the 10 days prior to receiving the CAR-T cells, the patient is given chemotherapy to prepare their body for infusion of the immunotherapy [2]. This whole process is very expensive; as Dr. June put it in his TEDMED talk, "it can cost up to 150,000 dollars to make the CAR-T cells for each patient." And the cost does not stop there: when the cost of treating other complications is included, the total "can reach one million dollars per patient" [1].
The biggest problem with fighting cancer is that cancer cells arise from normal cells in the body gone wrong. Because cancer cells look so similar to normal cells, the body's natural immune system, which includes B and T-cells, cannot discern the difference between them and so cannot fight off the cancer. The concept underlying CAR-T is to isolate a patient's T-cells and genetically engineer them to express a protein, called a receptor, that can directly recognize and target the cancer cells [2]. The genetically modified receptor allows the newly created CAR-T cells to bind cancer cells by finding the antigen conjugate to the newly added receptor. Once the bond between receptor and antigen has formed, the CAR-T cells become cytotoxic and release small molecules that signal the cancer cell to begin apoptosis [3]. Although there have long been drugs that help the body's T-cells fight cancer, CAR-T breaks the mold by showing great efficacy and selectivity. Dr. June stated, "27 out of 30 patients, the first 30 we treated, or 90 percent, had a complete remission after CAR-T cells." He then goes on to say, "companies often declare success in a cancer trial if 15 percent of the patients had a complete response rate" [1].
As amazing as the results of CAR-T have been, this success did not happen overnight. According to Dr. June, "CAR T-cell therapies came to us after a 30-year journey, along with a road full of setbacks and surprises." One of these setbacks is the side effects that result from the delivery of CAR-T cells. When T-cells find their corresponding antigen, in this case the target on the cancer cells, they begin to multiply and proliferate at very high levels. For patients who have received the therapy, this is a good sign, because the increase in T-cells indicates that the therapy is working. When T-cells rapidly proliferate, they produce molecules called cytokines. Cytokines are small signaling proteins that tell other cells around them what to do. During CAR-T therapy, the T-cells rapidly produce a cytokine called IL-6, or interleukin-6, which induces inflammation, fever, and even organ failure when produced in high amounts [3].
According to Dr. June, the first patient to receive CAR-T had “weeks to live and … already paid for his funeral.” When he was infused with CAR-T, the patient had a high fever and fell comatose for 28 days [1]. When he awoke from his coma, he was examined by doctors and they found that his leukemia had been completely eliminated from his body, meaning that CAR-T had worked. Dr. June reported that “the CAR-T cells had attacked the leukemia … and had dissolved between 2.9 and 7.7 pounds of tumor” [1].
Although the first patients had outstanding success, the doctors still did not know what caused the fevers and organ failures. It was not until the first child to receive CAR-T went through the treatment that they discovered the cause of the adverse reaction. Emily Whitehead, at six years old, was the first child to be enrolled in the CAR-T clinical trial [1]. Emily had been diagnosed with acute lymphoblastic leukemia (ALL), an advanced, incurable form of leukemia. After she received the infusion of CAR-T, she experienced the same symptoms as the prior patient. "By day three, she was comatose and on life support for kidney failure, lung failure, and coma. Her fever was as high as 106 degrees Fahrenheit for three days. And we didn't know what was causing those fevers" [1]. While running tests on Emily, the doctors found an upregulation of IL-6 in her blood. Dr. June suggested that they administer tocilizumab to combat the increased IL-6 levels. After contacting Emily's parents and the review board, Emily was given the drug, and "Within hours after treatment with Tocilizumab, Emily began to improve very rapidly. Twenty-three days after her treatment, she was declared cancer-free. And today, she's 12 years old and still in remission" [1]. Currently, two versions of CAR-T have been approved by the FDA, Yescarta and Kymriah, which treat diffuse large B-cell lymphoma (DLBCL) and acute lymphoblastic leukemia (ALL), respectively [1].
The whole process is very stressful and time-sensitive. This long manufacturing task is behind the million-dollar price tag on CAR-T and is why only patients in the worst medical states can receive it [1]. However, as Dr. June states, "the cost of failure is even worse." Despite the financial cost and difficult manufacturing process, CAR-T has elevated cancer therapy to a new level and set a new standard of care. However, there is still much work to be done. The current CAR-T drugs have only been shown to be effective against liquid cancers, such as lymphomas, and remain ineffective against solid-tumor cancers [4]. Regardless, research into improving the CAR-T process continues at both the academic and industrial levels.
References:
- June, Carl. “A ‘living drug’ that could change the way we treat cancer.” TEDMED, Nov. 2018, ted.com/talks/carl_june_a_living_drug_that_could_change_the_way_we_treat_cancer.
- Tyagarajan S, Spencer T, Smith J. 2019. Optimizing CAR-T Cell Manufacturing Processes during Pivotal Clinical Trials. Mol Ther. 16: 136-144.
- Maude SL, Laetsch TW, Buechner J, et al. 2018. Tisagenlecleucel in Children and Young Adults with B-Cell Lymphoblastic Leukemia. N Engl J Med. 378: 439-448.
- O’Rourke DM, Nasrallah MP, Desai A, et al. 2017. A single dose of peripherally infused EGFRvIII-directed CAR T cells mediates antigen loss and induces adaptive resistance in patients with recurrent glioblastoma. Sci Transl Med. 9: 399.
The Similarity Between Human and Dog Microbiomes
By Mangurleen Kaur, Biological Science '23
Author's Note: In one of my basic biology classes, I learned about microbes. That class discussed relationships among microbes and between microbes and human beings. One of the points that stuck in my mind was the microbial relationship between humans and one of our favorite pets, dogs. Researching this topic, I found it so astounding that I decided to write about it. I hope this piece will be interesting not only for science-lovers but also for the general public.
Both inside and out, our bodies harbor a huge array of microorganisms. These microorganisms are a diverse group of generally minute life forms, collectively called the microbiota when found within a specific environment. The microbiota can include all the microorganisms found in an environment: bacteria, viruses, archaea, protozoa, and fungi. The collection of genomes from all the microorganisms found in a particular environment, in turn, is referred to as a microbiome. According to the Human Microbiome Project (HMP), this plethora of microbes contributes more genes responsible for human survival than humans themselves contribute. Researchers have also estimated that the human microbiome contains 360 times more bacterial genes than human genes. These results show that this microbial contribution is critical for human survival. For instance, bacterial genes present in the gastrointestinal tract allow humans to digest and absorb nutrients that would otherwise be unavailable. In addition, microbes assist in the synthesis of many beneficial compounds, like vitamins and anti-inflammatory agents, which our genome cannot produce. (4)
Where does this mini-ecosystem come from? The microbiome begins to colonize our bodies as soon as we leave the mother's womb: we acquire microbes from the mother's vagina and then, later on, through breastfeeding, which plays a great role in shaping each person's unique microbial community. Several factors influence the microbiome, including physiology, food, lifestyle, age, and environment. Microbiomes are present not only in humans but also in most animals, where they play a significant role in health. For instance, gastrointestinal microorganisms exist in symbiotic association with animals. Microorganisms in the gut assist in the digestion of feedstuffs, help protect the animal from infections, and in some cases even synthesize and provide essential nutrients to their animal host. This gives us an idea of how important these microorganisms are to living systems as a whole. (3)
Besides humans' strong emotional connection with dogs, there is also a biological dimension to human-dog interactions, and research has been conducted in this context. Computational biologist Luis Pedro Coelho and his colleagues at the European Molecular Biology Laboratory, in collaboration with Nestlé Research, studied the gut microbiome (the genetic material belonging to the microbiota) of beagles and retrievers. They found that the gene content of the dog gut microbiome showed more similarity to the human gut microbiome than the microbiomes of pigs or mice did. When the researchers mapped the gene content of the dog, mouse, and pig microbiomes against human gut genes, they found overlaps of 63%, 20%, and 33%, respectively. (5) This shows the extensive similarity between the human and dog gut microbiomes in comparison with other animals. Speaking on the discovery, Luis Pedro Coelho said: "We found many similarities between the gene content of the human and dog gut microbiome. The results of this comparison suggest that we are more similar to man's best friend than we originally thought." (1)
Researchers at the University of Colorado Boulder studied the types of microbes present on different parts of the human body to better understand this diversity and its significance. They sampled 159 people and 36 dogs from 60 American families, taking samples from the tongue, forehead, right and left palms, and feces to characterize individual microbial communities. The researchers learned that people who own dogs are much more likely to share the same kinds of "good" bacteria with their dogs. They also found that children raised with dogs are less likely than others to develop a range of immune-related disorders, including asthma and allergies. "One of the biggest surprises was that we could detect such a strong connection between their owners and pets," said Knight, a faculty member at CU-Boulder's BioFrontiers Institute. (6) The results showed that cohabiting adults who own a dog share the greatest number of skin phylotypes, while adults who neither own a dog nor live together share the fewest.
The University of Arizona, together with other universities including UC San Diego, is conducting another study that is recruiting healthy people from Arizona aged 50 or older who have not lived with dogs for at least the past six months; participants who would like to live with an assigned dog are then selected. The goal of the study is to see whether dogs enhance the health of older people and act as probiotics (sources of good bacteria). This research is ongoing, and the outcomes have not yet been released. Rob Knight, Professor of Pediatrics and Computer Science & Engineering at UC San Diego, and his lab have studied microbiomes extensively. Knight and his team found that the microbial communities on adult skin are, on average, more similar to those of their own dogs than to other dogs. They also found that cohabiting couples share more microbes with one another if they have a dog than couples who do not. Their research suggests that a dog's owner can be identified just by analyzing the microbial diversity of the dog and its human, since they share microbiomes. These studies are uncovering relationships that are valuable to microbiology and to the health sciences more broadly. (2)
These studies reveal various interesting relationships between microbiomes, us, and other living beings. So far, the studies discussed show how a dog's microbiome is shared with its owner and how gene sequencing helps us understand these connections. The growing understanding of these connections raises further questions: What are the health benefits of a dog to a human? How can dogs help in preventing certain chronic diseases? This represents an exciting challenge for scientists and researchers to refine their understanding of microbiomes and answer these emerging questions.
Works Cited
- "NIH Human Microbiome Project defines normal bacterial makeup of the body". National Institutes of Health, U.S. Department of Health and Human Services. www.nih.gov. Published on August 31, 2015. Accessed May 10, 2020.
- Ganguly, Prabarna. “Microbes in us and their role in human health and disease”. www.Genome.gov. Published on May 29, 2019. Accessed May 10, 2020.
- “Dog microbiomes closer to humans’ than expected”. Research in Germany, Federal Ministry of Education and Research. www.researchingermany.org. Published on April 20, 2018. Accessed May 11, 2020.
- Trevino, Julissa. "A Surprising Way Dogs Are Similar to Humans." www.smithsonianmag.com. Published on April 23, 2018. Accessed February 11, 2020.
- Song, Se Jin, Christian Lauber, Elizabeth K Costello, Catherine A Lozupone, Gregory Humphrey, Donna Berg-Lyons, Gregory Caporaso, et al. “Cohabiting Family Members Share Microbiota with One Another and with Their Dogs.” eLife. eLife Sciences Publications, Ltd. elifesciences.org. Published on April 16, 2013. Accessed May 11, 2020.
- Sriskantharajah, Srimathy. “Ever feel in your gut that you and your dog have more in common than you realized?” www.biomedcentral.com. Published on April 11, 2018. Accessed February 11, 2020.
Use of Transgenic Fish and Morpholinos for Analysis of the Development of the Hematopoietic System
By Colleen Mcvay, Biotechnology, 2021
Author's Note: I wrote this essay for my Molecular Genetics class to review the methods of using zebrafish as a model for understanding the mechanisms underlying the development of blood (hematopoietic) stem cells. I would love for readers to better understand how the use of transgenic zebrafish and morpholinos has advanced our knowledge of the embryonic origin, genetic regulation, and migration of HSCs during early embryonic development.
Introduction
Hematopoietic stem cells, the immature cells found in the peripheral blood and bone marrow, develop during embryogenesis and are responsible for the constant renewal of blood throughout an organism. Hematopoietic development in the vertebrate embryo arises in consecutive, overlapping waves, described as the primitive and definitive waves. These waves are distinguished by the types of specialized blood cells they generate, and each occurs in distinct anatomical locations (7). In order to visualize and manipulate these embryonic developmental processes, a genetically tractable model must be used. Although many transgenic animals provide adequate models for hematopoiesis and disease study, the zebrafish (Danio rerio) proves far superior because its embryonic developmental processes are easily visualized and manipulated (6). Through diagrams and analysis, this discussion will expand upon the mechanisms of hematopoietic stem cell development and explain how this knowledge is enriched through the use of transgenic animals and morpholinos in models such as the zebrafish.
The Zebrafish Model
The zebrafish (Danio rerio) model has proven to be a powerful tool in the study of hematopoiesis and offers clear advantages over other vertebrate models, such as the mouse (Mus musculus). These advantages include developmental and molecular mechanisms conserved with higher vertebrates, the optical transparency of its embryos and larvae, the genetic and experimental convenience of the fish, external fertilization allowing in vivo visualization of embryogenesis, and its sequential waves of hematopoiesis (9). Additionally, zebrafish allow clear visualization of the phenotypic changes that occur during the transition from the embryonic to the adult stage, which is beneficial for understanding and visualizing the sequential-wave mechanism of hematopoiesis explained below (8). In mouse models, by contrast, loss of many hematopoietic transcription factors is embryonic lethal, meaning the embryos die during development, which prevents the same kind of visualization (12).
An Overview of Hematopoietic Development
The development of blood in all vertebrates involves two waves of hematopoiesis: the primitive wave and the definitive wave (4). Primitive hematopoiesis, involving an erythroid progenitor (a cell that gives rise to megakaryocytes and erythrocytes), happens during early embryonic development and is responsible for producing erythroid and myeloid cell populations (5). The primitive wave is transitory, and its main purpose is to produce red blood cells to support tissue oxygenation. These erythroid progenitor cells first appear in blood islands in the extra-embryonic yolk sac; however, they are neither pluripotent nor capable of self-renewal (11). Later in development (at varying points for different species), definitive hematopoiesis produces hematopoietic stem and progenitor cells (HSPCs) that generate the multipotent blood lineages of the adult organism (7). The HSCs originate in the aorta-gonad-mesonephros (AGM) region of the developing embryo and then migrate to the fetal liver and bone marrow [Figure 1].
Figure 1: Stages of Embryonic Hematopoiesis
This figure shows the establishment of primitive and definitive hematopoietic stem cells (HSCs) during embryonic development. The first HSCs appear in the blood islands of the extraembryonic yolk sac. The primitive wave is transient, and the successive definitive wave starts intraembryonically in the aorta-gonad-mesonephros (AGM) region. The definitive HSCs are multipotent and migrate to the fetal liver, where they proliferate before seeding the bone marrow. From there, embryonic hematopoietic cells enter the systemic circulation.
Hematopoietic Development in the Zebrafish Model:
Like all vertebrates, zebrafish have sequential waves of hematopoiesis. However, hematopoiesis in zebrafish occurs in a distinct manner compared to other vertebrate models, with its primitive hematopoietic cells being generated intra-embryonically, in the ventral mesoderm tissue called the intermediate cell mass (ICM) (2). Throughout this primitive wave, the anterior part of the embryo creates myeloid cells while the posterior creates mostly erythrocytes, both of which circulate throughout the embryo from 24 hours post-fertilization (10). Hematopoiesis next occurs in the aorta-gonad-mesonephros (AGM) region, followed by the emergence of HSCs from the ventral wall of the dorsal aorta. The HSCs then migrate to a posterior region in the tail called the caudal hematopoietic tissue (CHT). Finally, from 4 days post-fertilization, lymphopoiesis initiates in the thymus and HSCs move to the kidney marrow (functionally equivalent to bone marrow in mammals) (11, 10) [Figure 2]. Although the anatomical sites of hematopoiesis differ between zebrafish and mammals, the molecular mechanisms and genetic regulation are highly conserved, permitting translation to mammals in many ways. First, because zebrafish embryos can survive without red blood cells for a long time through passive diffusion of oxygen, they are ideal for identifying mutations that would be embryonic lethal in mice (9). Zebrafish mutants have revealed genes that are critical components of human blood diseases and allow toxicity and embryonic lethality to be recognized at an early stage of drug development. Additionally, the zebrafish model is amenable to forward genetic screens that are infeasible in other vertebrate models simply due to cost and space requirements. Finally, zebrafish embryos are permeable to water-soluble chemicals, making them ideal for high-throughput screening of novel bioactive compounds.
Figure 2: Hematopoiesis Development in the Zebrafish Model
A.) The sequential sites of hematopoiesis in embryonic zebrafish development. Development occurs first in the intermediate cell mass (ICM), next in the aorta-gonad-mesonephros (AGM), and then in the caudal hematopoietic tissue (CHT). Later, hematopoietic cells appear in the thymus and kidney (Modified from Orkin and Zon, 2008).
B.) Timeline for the developmental windows for hematopoietic sites in the zebrafish (Modified from Orkin and Zon, 2008).
Transgenic Zebrafish & Morpholinos to Understand Genetic HSC Regulation and Migration
Transgenic zebrafish and morpholinos are easily manipulated and visualized through microinjection, chemical screening, and mutagenesis, all of which aid in identifying hematopoietic gene mutations and understanding gene regulation and migration in a vertebrate model. Epigenetic analysis of these mutations (through RNA sequencing, ChIP, microarrays, and selective inhibition of a gene) has identified critical components of blood development, describing both the functions of these genes within hematopoiesis and the phenotypes associated with defective development (1). Morpholinos target sequences at the transcriptional start site, allowing selective inhibition of a targeted gene and analysis of the regulatory sequences in the resulting mutant (Martin et al., 2011). Large-scale screening techniques (e.g., chemical suppressor screens) applied to these mutations have identified many small molecules capable of rescuing hematopoietic defects and halting disease, along with new pathways of regulation (9). Although transgenic organisms have different origin sites and migratory patterns than mammalian hematopoiesis, the genetic regulation of HSC development and lineage specification is conserved, allowing insights into the pathophysiology of disease.
Conclusion
The zebrafish is an invaluable vertebrate model for studies of hematopoiesis because of its amenability to genetic manipulations and its easily viewed embryonic developmental processes. This organism has become increasingly important in understanding the genetic and epigenetic mechanisms of blood cell development and the information produced is vital for the translation into regenerative medicine applications. Although more research is needed into specifics of HSC differentiation and self-renewal, zebrafish sufficiently allow for newly identified mutations and translocations of human hematopoietic diseases and cancers to be visualized and analyzed, unlike any other model organism. With this analysis, a more complete understanding of the molecular mechanisms of certain hematopoietic diseases can be made, thus aiding in the process of new treatments.
References
- Boatman S, Barrett F, Satishchandran S, et al. Assaying hematopoiesis using zebrafish. Blood Cells Mol Dis 2013;51:271–276
- Detrich H. W et al. (1995). Intraembryonic hematopoietic cell migration during vertebrate development. Proc Natl Acad Sci USA 92: 10713- 10717.
- E. Dzierzak and N. Speck, “Of lineage and legacy: the development of mammalian hematopoietic stem cells,” Nature Immunology, vol. 9, no. 2, pp. 129–136, 2008.
- Galloway J. L., Zon L. I. (2003). Ontogeny of hematopoiesis: examining the emergence of hematopoietic cells in the vertebrate embryo. Curr. Top. Dev. Biol. 53, 139-158
- Kumar, Akhilesh et al. “Understanding the Journey of Human Hematopoietic Stem Cell Development.” Stem cells international vol. 2019 2141475. 6 May. 2019, doi:10.1155/2019/2141475
- Gore, Aniket V et al. “The zebrafish: A fintastic model for hematopoietic development and disease.” Wiley interdisciplinary reviews. Developmental biology vol. 7,3 (2018): e312. doi:10.1002/wdev.312
- Jagannathan-Bogdan, Madhumita, and Leonard I Zon. “Hematopoiesis.” Development (Cambridge, England) vol. 140,12 (2013): 2463-7. doi:10.1242/dev.083147
- de Jong, J.L.O., and Zon, L.I. (2005). Use of the Zebrafish System to Study Primitive and Definitive Hematopoiesis. Annu. Rev. Genet. 39, 481–501.
- Jing, L., and Zon, L.I. (2011). Zebrafish as a model for normal and malignant hematopoiesis. Dis. Model. Mech. 4, 433–438.
- Orkin, Stuart H, and Leonard I Zon. “Hematopoiesis: an evolving paradigm for stem cell biology.” Cell vol. 132,4 (2008): 631-44. doi:10.1016/j.cell.2008.01.025
- Paik E. J., Zon L. I. (2010). Hematopoietic development in the zebrafish. Int. J. Dev. Biol. 54, 1127-1137
- Sood, Raman, and Paul Liu. “Novel insights into the genetic controls of primitive and definitive hematopoiesis from zebrafish models.” Advances in hematology vol. 2012 (2012): 830703. doi:10.1155/2012/830703
The Wood Wide Web: Underground Fungi-Plant Communication Network
By Annie Chen, Environmental Science and Management ’19
Author's note: When people think of ecosystems, trees and animals usually come to mind. However, we most often neglect an important part of the ecosystem: fungi. Without us noticing, fungi stealthily connect organisms underground, creating a communication network that helps organisms interact with one another.
Picture yourself walking your dog in a quiet, peaceful natural forest, where you imagine the two of you as the only organisms capable of interacting with one another. However, you are not alone; the plants can communicate, and those trees and grasses are always speaking to each other without you taking notice. The conversation between vascular plants in this forest started long before any of us can remember, and it will likely continue if this forest is left untouched. These conversations between seemingly disconnected organisms have helped the forest survive and thrive to become what you see today. You might wonder, what do these plants talk about, and most importantly, how do they communicate if they cannot move freely and have no vocal cords? The secret lies underground, in an extensive network.
The underground network connects different immobile creatures to one another. Much like above-ground biological interactions, the underground ecosystem is diverse; it not only houses many animals but also consists of the roots of different plants, bacteria, and fungal mycelium. Plant roots interact with their immediate neighbors, but in order for plants to communicate with plants farther away, they rely on the underground fungal network, or, in the words of Dr. Suzanne Simard, who popularized the idea, the "Wood Wide Web" (WWW).
What is the underground “Wood Wide Web”, and how is it built?
This communication network is not made up of invisible radio waves like our Wi-Fi, but rather relies on a minuscule and dense fungal network to deliver various signals and information [6]. These fungi, using their branching, thread-like filaments, build a communication network called the mycelium that connects individual plants, and even the whole ecosystem. The mycelium delivers nutrients, sugar, and water and, in a more complex dynamic with the plants, delivers chemical signals. The fungi's ability to expand their mycelium through reproduction and individual growth helps build these connections within the network. For fungi to expand their mycelium and link different individual plants into a network, creating such an extensive system must be evolutionarily advantageous for them. That is where plant roots and their cooperative interactions come into play.
This communication network builds upon the foundation of mutualistic relationships between plants and fungi called mycorrhizae. In this mutualism, plants provide sugars to the fungi in exchange for limiting nutrients such as phosphorus and nitrogen, and sometimes water (Figure 1). According to an article published by Fleming, around 80-90% of the earth's vascular plants have this mutualistic relationship, which allows plants and fungi to connect with one another through the plant roots. Without this mutually beneficial relationship, the fungi are not obligated to expand their network to connect to plant roots and "help" these plants deliver chemical signals.
Beyond nutrient and informational exchange, plants also benefit from fungal priming, in which the initial fungal infection that creates the exchange interface between plant roots and fungal cells forces the plant immune system to increase its defenses. This increased immunity indirectly improves the infected plant's chances of resisting major threats in the ecosystem, such as disease [6]. This continuous plant-fungi network, through nutrient exchange and the strengthening of each species' survival, connects the whole ecosystem together.
Figure 1: A simplified visual of species interactions within the fungal network.
(Source: BBC)
That being said, the plant and fungal species that make up the WWW can vary with the participants that built the ecosystem. This interaction also means that plants can selectively provide carbon or release defense chemicals to decide which fungi remain and hold a mutualistic relationship with them [1]. An introduced non-native species can alter its new ecosystem by encouraging different types of mycorrhizae. One such example is the introduction of European cheatgrass in Utah, U.S.A. The mycorrhizal makeup at the Utah site showed no significant changes prior to the introduction; however, once European cheatgrass arrived, the site showed a shift in fungal genetic makeup, even though the cheatgrass did not bring European fungi with it [8]. Each plant individual, or species, using its preferences and abilities to "choose" its mutualistic partners, can diversify the fungal network into something more extensive and powerful, both to the benefit and to the harm of other species in the ecosystem. This interspecies perspective is important in understanding the WWW.
Plants talk and interact through the “Wood Wide Web”
The communication extends to others in the ecosystem: plants can "speak" to each other interspecifically, too. The individuals in an ecosystem are closely linked to one another, and so are the relationships between plant individuals, whether directly with each other, indirectly through the fungal network, or both. Indirect communication relies on the fungal network, through which various chemical signals pass. For instance, an increased phosphorus level in the soil signals to other plants that a plant-fungal interaction is taking place, and they may respond in different ways to turn the situation to their advantage: they could try to claim their share of nutrients by producing sugars to attract these fungi, or they could weaken their plant competitors by excreting chemicals that impair the fungi's ability to provide nutrients [13]. The WWW provides an internet that allows plants to select from a variety of methods to interact with one another, near or far.
Plants can choose to actively help each other through this fungal network, allowing both individuals, or species, to thrive in the ecosystem. Evolutionarily speaking, a plant can benefit from helping its own kind survive. When an individual plant is thriving and producing excess carbon, it can help other plants by transferring excess nutrients through the fungal network [6]. An older, dying tree can also choose to transfer its resources to younger neighbors through the fungal network, or donate its stored nutrients to the entire ecosystem through the decay process, which is aided by the extensive growth of fungal hyphae over the material [5]. Furthermore, through the WWW, plants are able to communicate with one another about possible threats, including herbivores and parasitic fungi. In the research of Song et al., tomato plants infected with pathogens were able to send various defensive chemical signals, such as enzymes, into the existing fungal network to healthy neighbors, warning them of the dangers nearby before they were infected themselves; using this mechanism, plants can concentrate defensive chemicals among neighbors to minimize the spread of the parasitic fungus in the area.
Not only can plants benefit one another, they can also use this network to put others at a disadvantage, for instance to suppress a competing or predating species that threatens their own survival. Allelopathy, the exuding of chemicals to ward off enemies, usually brings to mind plants discouraging herbivores from consuming them, such as the milky sap that causes skin rashes and inflammation when a cucumber vine is cut, but allelopathy is also active underground through the WWW. Barto et al., through their research on allelopathy, show that even within a disturbed habitat, when there is competition between plant species, one species may use the regional network of fungi attached to it to deliver allelochemicals to a neighboring species, preserving the fitness of its own kind.
Passive Animals, Active Plant and Fungi
We tend to think of herbivores as active players impacting the ecosystem; in the WWW, they are the last to respond to changes. Plants and fungi signal each other when an herbivore is present in the network, well before it has established its presence among the neighboring plants. Fungi are an important and active part of this ecosystem because they can also help exclude herbivores through chemical allelopathy. While fungi could choose to colonize a different species that provides more benefits to them, they can instead concentrate their energy on defending their current host. Before the herbivore can expand its population, the plants have already communicated with one another through the excretion of allelopathic chemicals, not only to ward off the herbivores causing potential damage but also to warn other plants of the herbivore's presence [1]. Fungal colonization of two nightshade species, Solanum ptycanthum and Solanum dulcamara, increased levels of defense proteins against feeding caterpillars. This is just one example of an herbivory defense mechanism that reduces predator fitness, specifically growth and feeding rates [11]. When caterpillars feed on these Solanum species, the active players in this relationship, the fungi and their plant hosts, use chemical defense mechanisms indirectly induced by the fungi to discourage herbivores from feeding; over evolutionary time, they eventually drive out predators that are disadvantageous to fungi-plant fitness.
Alone without the Wood Wide Web: Human Impacts
The network is built on a web of hyphal connections that is barely visible to the human eye, and all the more vulnerable to changes. Older ecosystems not only have a higher percentage of large trees with broad root systems, but are also denser, both of which lead to a more extensive mycorrhizal fungal network. Species diversity, on top of age and density, contributes to a complex and healthy WWW that supports all plants in the ecosystem [3]. However, a disturbed ecosystem severs the connections in this network, making the previously extensive system difficult to repair.
Human activities that disturb the soil can damage this fragile yet powerful connection: seasonal tilling in agriculture, intensive logging, and changes to soil chemistry and structure from laying concrete all inhibit the soil from building an extensive web. Physically turning and chemically altering the soil is a direct human impact that cuts off hyphal connections between plant individuals in the system. According to Dr. Simard's statement in a Biohabitats interview, urban plants are less healthy because they lack the WWW to help them thrive through exchange of nutrients, water, and chemical signals; they must meet all those needs themselves or rely on humans to provide them. Indirectly, the larger-scale death and removal of plant individuals through logging also destroys the healthy mutualistic relationships between plants and fungi. If an individual plant is prevented from connecting with its mutualistic partners, whether through disturbance of the soil or the death of those partners, the WWW cannot reach its full extent, and this isolation leaves urban tree populations vulnerable to disease unless humans diligently maintain them.
It is true that smaller versions of the WWW still develop between periods of disturbance, as shown indirectly by the fungal colonization observed in off-site lab experiments such as those included in the studies of Barto et al. and Hawkes et al. However, these smaller networks lack important interspecies collaborations; a minimally disturbed habitat functions much better at resisting climate change and the growing number of foreign and invasive species that threaten the health of the ecosystem.
Fortunately, despite the growing demand for land driven by economic and population growth, there is increased awareness of the importance of plants and the health of the ecosystem. Over the last two decades, new policies and practices indicate that major western conservation agencies have started to take on an interspecies perspective. One notable example is the inclusion of ecosystem management under the Clean Water Act, which reflects the notion that endangered flora and fauna species depend on the health of an ecosystem [14]. This growing understanding of how interconnected plant species are, in addition to conservation methods that existed long before western colonization, has changed how governments aim to preserve nature.
Regardless of the level of human impact, the WWW carries important communication between plants, fungi, and herbivores through chemical signals and nutrient exchange, allowing species to sustain or outcompete one another. The connectivity to relay information within this network is key to a healthy plant community, and in turn to the health of the ecosystem. The next time you walk your dog in the woods, remember that the plants around you are capable of communicating thanks to this underground network. To keep such forests healthy for generations to come, it is up to us to rethink development strategies and preserve the network that helps these species thrive and keep communicating through the WWW.
References
- Biere, Arjen, Hamida B. Marak, and Jos MM van Damme. “Plant chemical defense against herbivores and pathogens: generalized defense or trade-offs?.” Oecologia 140.3 (2004): 430-441.
- Barto, E. Kathryn, et al. “The fungal fast lane: common mycorrhizal networks extend bioactive zones of allelochemicals in soils.” PLoS One 6.11 (2011): e27195.
- Beiler, Kevin J., et al. “Architecture of the wood‐wide web: Rhizopogon spp. genes link multiple Douglas‐fir cohorts.” New Phytologist 185.2 (2010): 543-553.
- Belnap, Jayne, and Susan L. Phillips. “Soil biota in an ungrazed grassland: response to annual grass (Bromus tectorum) invasion.” Ecological applications 11.5 (2001): 1261-1275.
- Biohabitats. “Expert Q&A: Suzanne Simard.” Biohabitats Newsletter 14.4 (2016).
- Fleming, Nic. “Plants talk to each other through a network of fungus.” BBC Earth. (2014).
- Gehring, Catherine, and Alison Bennett. “Mycorrhizal fungal–plant–insect interactions: the importance of a community approach.” Environmental entomology 38.1 (2009): 93-102.
- Hawkes, Christine V., et al. “Arbuscular mycorrhizal assemblages in native plant roots change in the presence of invasive exotic grasses.” Plant and Soil 281.1-2 (2006): 369-380.
- Hawkes, Christine V., et al. “Plant invasion alters nitrogen cycling by modifying the soil nitrifying community.” Ecology letters 8.9 (2005): 976-985.
- Macfarlane, Robert. “The Secrets of the Wood Wide Web.” The New York Times. (2016).
- Minton, Michelle M., Nicholas A. Barber, and Lindsey L. Gordon. “Effects of arbuscular mycorrhizal fungi on herbivory defense in two Solanum (Solanaceae) species.” Plant Ecology and Evolution 149.2 (2016): 157-164.
- Song, Yuan Yuan, et al. “Interplant communication of tomato plants through underground common mycorrhizal networks.” PloS one 5.10 (2010): e13324.
- Van der Putten, Wim H. “Impacts of soil microbial communities on exotic plant invasions.” Trends in Ecology & Evolution 25.9 (2010): 512-519.
- Doremus, H., Tarlock, A. Dan. “Can the Clean Water Act Succeed as an Ecosystem Protection Law?” George Washington Journal of Energy and Environmental Law 4 (2013): 49.
The History and Politics of Marijuana in the United States
By Vishwanath Prathikanti, Political Science '23
Author’s note: Marijuana today is a very controversial topic, with some arguing for a complete criminalization of it, others advocating for complete decriminalization of it, and many more in between. To understand marijuana today, and what it does to your body, we need to unravel its complex history and proven effects. Unfortunately, this is not always clear cut.
Marijuana, according to the National Institute on Drug Abuse (NIDA), is defined as “a greenish-gray mixture of the dried flowers of Cannabis sativa” and its relatives, including Cannabis indica, Cannabis ruderalis, and hybrids [1]. The mind-altering effects of marijuana are mainly due to the chemical delta-9-tetrahydrocannabinol, commonly known as THC, which NIDA identifies as the active ingredient that makes marijuana dangerous.
This mind-altering effect stems from the structural similarity between THC and anandamide, a naturally occurring cannabinoid that functions as a neurotransmitter. Anandamide is an agonist, meaning it is a chemical that binds to receptors in the human endocannabinoid system and causes a response. The endocannabinoid system is a network of receptors that maintains bodily homeostasis, or stable conditions for the body [13]. Anandamide activates receptors that send chemical messages between nerve cells throughout the nervous system, specifically in brain areas “that influence pleasure, memory, thinking, concentration, movement, coordination, and sensory and time perception” [1]. The cerebellum, basal ganglia, and hippocampus are activated most strongly. Because of these effects on pleasure, researchers have speculated that anandamide may be released in the early phases of brain development to strengthen positive reactions toward food consumption, essentially encouraging people to eat, although this has not been confirmed [2]. Regardless, because of its similarity to anandamide, THC is able to attach to cannabinoid receptors on neurons in these brain areas and activate them. These receptors are normally activated by anandamide, but when they are activated by THC, various mental and physical functions, such as cognitive function and balance, become disrupted, and various mental effects take hold as well. These effects most commonly manifest as “pleasant euphoria” and at other times as “heightened sensory perception (e.g., brighter colors), laughter, altered perception of time, and increased appetite” [1].
Repeated interference with the endocannabinoid system by THC can lead to problems in areas of the brain that enable complex thinking, balance, and reaction time. THC has also been shown to have more lasting effects on developing brains, such as impairing specific learning and memory tasks, as discussed in one study involving mice. Gleason et al. targeted cannabinoid receptor 1 (CB1), an endocannabinoid receptor that is prevalent “in the cortex, hippocampus and striatum” and is activated by THC in both mice and humans [3]. They administered a CB1 agonist during adolescence in one group and during adulthood in another, then observed brain development. The adolescent group developed long-term hippocampal learning and memory deficits, specifically manifesting as hippocampal long-term depression. Hippocampal long-term depression is the process of reducing synaptic efficacy in the hippocampus in response to a patterned behavior, in this case marijuana usage, which makes learning new information more difficult [17]. The adult animals in this study did not show these changes. When it comes to humans, though, studies show conflicting results.
Most of the consistently observed damage to the human brain occurs in the prefrontal cortex, due to its large volume of CB1 receptors. One study compared adults who used marijuana four times per week with adults of a similar demographic who did not, and found that frequent use was associated with lower volumes of orbitofrontal cortex (OFC) gray matter [10]. The OFC is tied to various aspects of decision making and is widely believed to process the emotional and sensory details linked to decisions. The study explains that the OFC “is enriched with CB1 receptors, and is highly implicated in addictive behaviors such as those related to disruptions in motivation” [10]. The consequences of losing gray matter are not limited to suboptimal decision-making: to compensate for the loss of volume, the brain builds more connections in the OFC, and because of this higher number of connections, it requires more sustenance, particularly glucose, to function normally.
Although marijuana has existed in the world for centuries, marijuana research is still in its infancy. This is because the federal Drug Enforcement Administration (DEA) lists marijuana as a Schedule I drug, meaning “it has a high potential for abuse, no currently accepted medical use in treatment in the United States, and a lack of accepted safety for use under medical supervision” [5]. To put things into perspective, heroin, ecstasy, and methamphetamine are also Schedule I drugs. To study the effects of THC, researchers not only need a special permit, but they must also study individuals who are already consuming THC on a regular basis. This practice applies to all Schedule I drugs; scientists cannot administer heroin, methamphetamine, or ecstasy to individuals in any research project.
While THC has been shown to have various adverse effects on the brain, another component of marijuana, cannabidiol (CBD), has shown various positive effects. CBD is the second most prevalent ingredient in marijuana and an essential component of medical marijuana. While THC is the main psychoactive ingredient in cannabis, responsible for physical and mental disorientation, CBD is responsible for a sense of anxiety relief [4]. Until 2018, CBD was also considered a Schedule I drug, but it has since been removed from the schedule because of its relative safety, meaning anyone can legally buy and sell CBD in the US as long as it is derived from hemp [4]. Hemp is another plant in the cannabis family, but it has different physical and chemical properties that separate it from marijuana; it is typically characterized by sturdy stalks used in textiles and a low concentration of THC [16]. In particular, CBD has been essential in treating “some of the cruelest childhood epilepsy syndromes, such as Dravet syndrome and Lennox-Gastaut syndrome (LGS), which typically don’t respond to antiseizure medications” [4]. There is evidence that CBD helps with numerous other conditions, including anxiety, insomnia, and chronic pain, but further research investigating side effects is required.
Returning to the hippocampus, Demirakca et al. observed hippocampal volume reductions in users who consumed cannabis with a higher THC content than CBD content. They concluded that there was an inverse relationship between gray matter (GM) volume and the THC/CBD ratio, meaning the more THC a product contains, the more likely the user is to have reduced GM volume [19]. However, a three-year study conducted by Koenders et al. found no relevant correlation between cannabis usage in young adults and reduced GM volume in multiple regions of the brain, including the hippocampus [20].
CBD, in particular, has been shown to dampen amygdala activity, whereas THC has not shown such an effect. The amygdala is popularly known as the collection of nuclei that controls the sense of fear, linking it directly to a potential effect of THC usage, anxiety [18]. Surprisingly, a link between THC and amygdala damage has been disproven by multiple papers, while CBD does affect the amygdala. One study by Rocchetti et al. in 2013 and another by Pagliaccio et al. in 2015 disproved previously held beliefs linking marijuana use to reduced amygdala size. Rocchetti et al. conducted a smaller study and demonstrated publication bias in the preceding amygdala research, while Pagliaccio et al. conducted a larger study that found the variation in amygdala size between marijuana users and nonusers fell within the range of normal variation [11, 14]. However, while amygdala size may not be affected, there is evidence that its functionality is reduced. A study by Fusar-Poli et al. in 2010 found that CBD dampens amygdala function through its anti-anxiety effects: CBD weakens signals sent from the amygdala during fear-inducing situations, meaning that CBD usage could prompt a delayed or subdued reaction compared to normal [15].
This raises the question: if further research is required to understand the full effects of THC and CBD, why is marijuana such a charged political topic today? To understand marijuana today, it is necessary to examine its history, which has often been driven by political rather than scientific motives.
The history of regulation and fear associated with marijuana dates back to 1930, when Harry Anslinger was appointed the first commissioner of the Federal Bureau of Narcotics (FBN). Because of the Great Depression, the federal government needed to cut funding for various agencies, and at the time the FBN mainly combated heroin and cocaine, drugs used by a relatively small number of people. Anslinger needed a more widely used narcotic that would bring in the funds necessary to continue his war on drugs [6].
Anslinger decided that drug would be cannabis, and he latched on to the story of a man named Victor Licata, who killed his family with an axe and was said to have consumed cannabis beforehand. Although there was no evidence that Licata had used marijuana before the killings, newspapers quickly sensationalized the story, and Anslinger went on various radio shows claiming that the drug could cause insanity. The name “marijuana” was in fact promoted by Anslinger to associate the drug with Latinos, and he used racial tensions to insinuate that the drug made Black and Latino Americans “forget their place in the fabric of society” [6]. Anslinger’s actions culminated in his testimony before Congress in hearings on the Marihuana Tax Act of 1937, which effectively banned sales.
Before marijuana was categorized as a Schedule I drug, however, numerous committees ruled against its criminalization. After the Marihuana Tax Act of 1937, New York City mayor Fiorello LaGuardia assembled a special committee with the New York Academy of Medicine “to make a thorough sociological and scientific investigation” into the effects of marijuana. The committee thoroughly refuted Anslinger’s claims of insanity, reporting that “the basic personality structure of the individual does not change,” meaning that after the “high” has passed, the user’s personality is unchanged. They also stated that marijuana “does not evoke responses which would be totally alien [in an] undrugged state,” meaning that consuming marijuana would not cause an individual to act out of character [7]. The findings of the LaGuardia Report are largely consistent with research today, and they helped contribute to the Supreme Court overturning the Marihuana Tax Act in 1969 in Leary v. United States.
However, while the act was overturned, it was soon replaced by the Controlled Substances Act, which took effect in 1971 under the Nixon presidency and established marijuana as a Schedule I drug. This act was another racially motivated law, with Nixon’s domestic policy advisor later admitting that “the Nixon White House had two enemies: the antiwar left and black people. We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities. We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did” [8].
Nixon appointed a commission to investigate the effects of marijuana in 1972, and its report recommended decriminalizing marijuana. The report argued that “Criminal law is too harsh a tool to apply to personal possession even in the effort to discourage use…It implies an overwhelming indictment of the behavior which we believe is not appropriate. The actual and potential harm of use of the drug is not great enough to justify intrusion by the criminal law into private behavior, a step which our society takes only with the greatest reluctance” [9]. In response, Nixon not only refused to decriminalize marijuana but created the DEA the next year to enforce the laws regarding marijuana as a Schedule I drug.
Because of the strict regulations that remain in place today, marijuana research is still in its infancy. However, with the loosening of regulations on CBD research, marijuana is being pushed into the spotlight once again. This, coupled with marijuana’s history and the present realities of its ties to African American communities, has made the marijuana debate a hotbed of political discourse. Whether or not recreational marijuana becomes legal federally in the near future, it is clear that marijuana warrants further investigation to clear up the many inconsistencies surrounding its effects, both long-term and short-term, on the human body.
References
- National Institute on Drug Abuse research report series “Marijuana” https://www.drugabuse.gov/publications/research-reports/marijuana/what-marijuana
- Stephen V. Mahler, et al. “Endocannabinoid Hedonic Hotspot for Sensory Pleasure: Anandamide in Nucleus Accumbens Shell Enhances ‘Liking’ of a Sweet Reward.” Neuropsychopharmacology. 2007. https://www.nature.com/articles/1301376
- Kelly A. Gleason, et al. “Susceptibility of the adolescent brain to cannabinoids: long-term hippocampal effects and relevance to schizophrenia” Nature. 2012. https://www.nature.com/articles/tp2012122
- Peter Grinspoon, “Cannabidiol (CBD) — what we know and what we don’t” Harvard Health Blog, Harvard Health Publishing. August 24, 2018. https://www.health.harvard.edu/blog/cannabidiol-cbd-what-we-know-and-what-we-dont-2018082414476
- DEA “Drugs of Abuse (2017 edition)” 2017. https://www.dea.gov/sites/default/files/drug_of_abuse.pdf
- CBS News “The man behind the marijuana ban for all the wrong reasons” Nov. 17, 2016 https://www.cbsnews.com/news/harry-anslinger-the-man-behind-the-marijuana-ban/
- The LaGuardia Committee Report https://daggacouple.co.za/wp-content/uploads/1944/04/La-Guardia-report-1944.pdf
- Drug Policy Alliance “Top Adviser to Richard Nixon Admitted that ‘War on Drugs’ was Policy Tool to Go After Anti-War Protesters and Black People” http://www.drugpolicy.org/press-release/2016/03/top-adviser-richard-nixon-admitted-war-drugs-was-policy-tool-go-after-anti
- Schafer library of drug policy, “Marihuana: A Signal of Misunderstanding” http://www.druglibrary.org/schaffer/Library/studies/nc/ncmenu.htm
- Francesca M. Filbey, et al. “Long-term effects of marijuana use on the brain” PNAS. 2014. https://www.pnas.org/content/111/47/16913
- David Pagliaccio et al. “Shared Predisposition in the Association Between Cannabis Use and Subcortical Brain Structure” JAMA Psychiatry. October 2015. https://jamanetwork.com/journals/jamapsychiatry/fullarticle/2429550
- Albert Batalla et al. “Structural and Functional Imaging Studies in Chronic Cannabis Users: A Systematic Review of Adolescent and Adult Findings” PLOS One. February 2013. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0055821
- Cannabis research initiative “Human endocannabinoid system” UCLA Health. https://www.uclahealth.org/cannabis/human-endocannabinoid-system
- Matteo Rocchetti et al. “Is cannabis neurotoxic for the healthy brain? A meta‐analytical review of structural brain alterations in non‐psychotic users” Psychiatry and Clinical Neurosciences. Sept. 2013. https://onlinelibrary.wiley.com/doi/epdf/10.1111/pcn.12085
- Paolo Fusar-Poli et al. “Modulation of effective connectivity during emotional processing by Δ9-tetrahydrocannabinol and cannabidiol” International Journal of Neuropsychopharmacology. May 2010. https://academic.oup.com/ijnp/article/13/4/421/712253
- NIDA Blog. “What is Hemp?” National institute on drug abuse for teens. November 2015. https://teens.drugabuse.gov/blog/post/what-is-hemp
- Peter Massey et al. “Long-term depression: multiple forms and implications for brain function” Cell. April 2007. https://www.cell.com/trends/neurosciences/fulltext/S0166-2236(07)00043-4?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0166223607000434%3Fshowall%3Dtrue
- Nicole Haloupek “What is the Amygdala?” Live Science. January 2020. https://www.livescience.com/amygdala.html
- Traute Demirakca et al. “Diminished gray matter in the hippocampus of cannabis users: Possible protective effects of cannabidiol” April 2011. https://www.sciencedirect.com/science/article/pii/S0376871610003364?via%3Dihub
- Laura Koenders et al. “Grey Matter Changes Associated with Heavy Cannabis Use: A Longitudinal sMRI Study” May 2016. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4880314/
Not All Heroes Wear Capes: How Algae Could Help Us Fight Climate Change
By Robert Polon, Biological Sciences Major, ’21
Author’s Note: In my UWP 102B class, we were assigned the task of constructing a literature review on any biology-related topic of our choice. A year ago, in my EVE 101 class, my professor briefly mentioned the idea that algae could be used to sequester atmospheric carbon dioxide in an attempt to slow the rate of climate change. I found this theory very interesting, and it resonated with me longer than most of the other subject matter in that class. I really enjoyed doing the research for this paper, and I hope it gives people some hope for the future. I’d like to thank my UWP professor, Kathie Gossett, for pointing me in the right direction throughout the process of writing this paper.
Abstract
With climate change growing ever more relevant in our daily lives, scientists are working hard to find solutions to slow and reverse the damage that humans are doing to the planet. Algae-based carbon sequestration methods are a viable solution to this problem. Photosynthesis allows algae to remove carbon dioxide from the atmosphere and turn it into biomass and oxygen. It has even been proposed that raw algal biomass can be harvested and used as a biofuel, providing a greener alternative to fossil fuels. Though technology is not yet developed enough to make this change in our primary fuel source, incremental steps can be taken to slowly integrate algal biofuel into daily life. Further research and innovation could make full-scale replacement of fossil fuels with algal biofuel a feasible option. Methods of algal cultivation include open-ocean algal blooms, photobioreactors, algal turf scrubbing, and BICCAPS (bicarbonate-based integrated carbon capture and algae production system). Each method has pros and cons, but open-ocean algal blooms tend to be the most popular because they are the most economical and produce the most algae, even though they are the most harmful to the environment.
Keywords
Algae | Biofuel | Climate Change | Carbon Sequestration
Introduction
As we move further into the 21st century, climate change is becoming less of a theory and more of a reality. Astronomically high post-Industrial Revolution rates of greenhouse gas emissions have begun to catch up with us: the initial consequences are now coming to light, with fears that worse is on the way. Many solutions have been proposed to decrease greenhouse gas emissions, but very few address the damage that has already been done. It has been proposed that growing algae in large quantities could help solve this climate crisis.
According to the Environmental Protection Agency, 76% of global greenhouse gas emissions come in the form of carbon dioxide. As algae grows, it removes carbon dioxide from the atmosphere by converting it to biomass and oxygen via photosynthesis, and it does so at relatively fast rates. On average, one kilogram of algae utilizes 1.87 kilograms of CO2 daily, which means that one acre of algae utilizes approximately 2.7 tons of CO2 per day [1]. For comparison, one acre of a 25-year-old maple-beech-birch forest utilizes only 2.18 kilograms of CO2 per day [2]; sequestering that same amount of carbon dioxide would require only 1.17 kilograms of algae. After its photosynthetic purpose has come to an end, the raw algal biomass can be harvested and used as an environmentally friendly biofuel. This literature review will serve as a comprehensive overview of the literature on the proposal to use algae as a primary combatant against global warming.
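The arithmetic behind these figures is straightforward to check. The short Python sketch below reproduces the forest comparison and back-calculates the standing algal biomass implied by the per-acre estimate; it assumes the cited 1.87 kg CO2 per kg of algae per day rate and treats the “2.7 tons” figure as metric tonnes, so it is an illustrative sanity check rather than a calculation taken from the sources.

```python
# Sanity check of the CO2-uptake figures cited above (illustrative only).
# Assumes the per-kilogram uptake rate from [1] and metric tonnes for the
# per-acre figure; neither assumption comes from the article itself.

CO2_PER_KG_ALGAE = 1.87      # kg CO2 fixed per kg of algae per day [1]
FOREST_PER_ACRE = 2.18       # kg CO2 per acre per day, 25-year-old forest [2]
ACRE_UPTAKE_TONNES = 2.7     # reported algal CO2 uptake per acre per day [1]

# Algae needed to match one acre of forest: 2.18 / 1.87 ≈ 1.17 kg
algae_equivalent = FOREST_PER_ACRE / CO2_PER_KG_ALGAE
print(f"Algae matching one acre of forest: {algae_equivalent:.2f} kg")

# Standing biomass implied by the per-acre figure: roughly 1,400 kg of algae
implied_biomass = ACRE_UPTAKE_TONNES * 1000 / CO2_PER_KG_ALGAE
print(f"Implied algal biomass per acre: {implied_biomass:.0f} kg")
```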
Carbon Dioxide
For centuries, heavy usage of fossil fuels has tarnished Earth’s atmosphere with the addition of greenhouse gases [3]. These gases trap heat by absorbing infrared radiation that would otherwise leave Earth’s atmosphere. This increases the overall temperature of the earth, which leads to the melting of polar ice caps, rising sea levels, and strengthening of tropical storm systems, among many other devastating environmental effects [4]. The most commonly emitted greenhouse gas, carbon dioxide, tends to be the primary focus of global warming treatments.
These algal treatment methods are no different. Every algal treatment option depends on the fact that algae sequester atmospheric carbon dioxide through photosynthesis, converting it into biomass and releasing oxygen into the atmosphere as a byproduct [5].
Algal Cultivation
There are four proposed methods of algal cultivation: open-ocean algal blooms, photobioreactors, algal turf scrubbing, and BICCAPS. These techniques all differ greatly, with various benefits and drawbacks to each.
Open-Ocean Algal Blooms
Algae is most abundant at the surface of the open ocean. With the addition of its limiting nutrient, iron, in the form of iron(II) sulfate (FeSO4), massive algal blooms can easily be sparked almost anywhere in the ocean [3]. This is the approach most scientists envision for sequestration because, of all the proposed cultivation techniques, it produces the most algae in the least amount of time. Intuitively, this method also removes the most carbon dioxide from the atmosphere, as the amount of CO2 removed is directly proportional to the quantity of algae undergoing photosynthesis.
There are many benefits to open-ocean algal blooms. There is no shortage of space on the surface of the ocean, so, hypothetically, a seemingly unlimited amount of algal biomass could be cultivated this way. The technique is also very cost-efficient: all that is needed is some iron(II) sulfate, and nature will do the rest [3].
Once the algal bloom has grown to its maximum size, there is an overabundance of algal biomass on the surface of the ocean. Some researchers have proposed that this mass be collected and used as a biofuel [5, 6, 7]. Others have proposed letting nature run its course and allowing the dead algae to sink to the bottom of the ocean, which ensures that the carbon dioxide it has taken out of the atmosphere is stored safely at depth [8]. There, the algal biomass is accessible for consumption by shellfish, which store the carbon in their calcium carbonate shells [3].
This solution is not an easy one to deploy, however, because algal blooms bring many problems to local ecosystems. Often referred to as harmful algal blooms (HABs), these rapidly growing algae clusters are devastating to the oceanic communities they touch: they increase acidity, lower temperature, and severely deplete oxygen levels in the waters where they grow [9]. Most lifeforms are not prepared to handle environmental changes that push them out of their niches, so it is easy to see why HABs kill significant portions of marine life.
HABs can affect humans as well. Many algal species are toxic to us, and ingestion of contaminated fish or water from areas affected by these blooms can lead to severe sickness and even death. Examples of the resulting illnesses include ciguatera fish poisoning and paralytic, neurotoxic, amnesic, and diarrheic shellfish poisoning [10]. The effects of harmful algal blooms have only been studied in the short term, but from what has been observed so far, they present a serious barrier to this form of algal cultivation [11].
Photobioreactors
Photobioreactors are another frequently proposed tool for cultivating algae. These artificial growth chambers have controlled temperature, pH, and nutrient levels that allow optimal algal growth rates [12]. They can also run on wastewater that is not suitable for human consumption. Photobioreactors minimize evaporation, and with the addition of iron, magnesium, and vitamins, their rates of carbon dioxide capture increase further [1]. Because of the high concentration of algae in a relatively small space, photobioreactors have the highest rates of photosynthesis, and therefore of carbon dioxide uptake, of all the cultivation methods discussed in this paper.
This innovative technology was driven primarily by the need to come up with an alternative to triggering open-ocean algal blooms. Photobioreactors eliminate pollution and water contamination risks that are prevalent in harmful algal blooms. Furthermore, they make raw algal biomass easily accessible for collection and use as a biofuel, which open-ocean algal blooms do not [12].
The main drawback to this method is that the cost of building and maintaining photobioreactors is simply too high to be economically feasible right now [12]. Technological developments are needed to lower the short-term cost of operation and allow for mass production if photobioreactors are to become a primary means of carbon sequestration. Their long-term economic feasibility also remains unknown: most of the cost is incurred during construction, and while money is recovered through the algae cultivated, the technology has not been around long enough to support concrete long-term cost-benefit analyses without speculation [14].
Algal Turf Scrubbing (ATS)
In a 2018 proposal, botanist Walter Adey and colleagues described algal turf scrubbing (ATS), a technique created to efficiently cultivate algae for use in the agriculture and biofuel industries. The process uses miniature wave generators to slightly disturb the flat surface of a floway, a shallow channel through which water flows, stimulating the growth of numerous algal species. Biodiversity in these floways increases over time, and a typical ATS floway will eventually host over 100 different algal species [11].
Heavy metals and other toxic pollutants occasionally make their way into the floways; however, they are promptly removed to ensure that the product is as nontoxic as possible. The algal biomass is harvested biweekly and has a variety of uses. Less toxic harvests can be used as fertilizer in the agricultural industry, which the researchers claim is the most economically efficient use of the harvest. The biomass can also go toward biofuel, although the creators of the ATS system believe the majority of their product will be used in agriculture, because they will not be able to produce enough algae to keep up with demand if our society moves toward using it as a biofuel [11].
The problems with ATS are not technological but sociopolitical: the research team behind it fears that it will not receive the funding and resources needed to perform cultivation at an effective scale [11].
BICCAPS
The bicarbonate-based integrated carbon capture and algae production system (BICCAPS) was proposed to reduce the high costs of commercial algal biomass production by recycling the bicarbonate that is created when carbon dioxide is captured from the atmosphere and using it to culture alkalihalophilic microalgae (algae that thrive at a very basic pH, above 8.5). By enabling this recycled culture, the system should, in theory, cut the costs of both carbon capture and microalgal culture. It is also very sustainable, as it recycles nutrients and minimizes water usage, and the algae cultivated can be turned into biofuel to lower fossil fuel usage [13].
The main drawback of this closed-loop system is that it does not cultivate as much algae as the other systems, though work is currently being done to improve this. It has been shown that the efficiency of BICCAPS improves significantly with the addition of sodium to the water, which stimulates the growth of alkalihalophilic microalgae [13]. With modest improvements to its efficiency, BICCAPS could become a primary algal biomass production strategy because of its low cost and sustainability.
Use of Algae as a Biofuel
While algae may not match the energetic efficiency of fossil fuels, it is not far behind. It can be burned as a biofuel to power transportation, which would allow us to lower our use of fossil fuels and, consequently, our greenhouse gas emissions. When dry algal biomass is burned, it releases more oxygen and less carbon dioxide than our current fuel sources, which not only helps lower CO2 emissions but also increases the overall atmospheric ratio of oxygen to carbon dioxide. More research still needs to be done to find the best possible blend of algal species for fuel use [12]. Algae alone would not meet the world’s energy demand, but photobioreactor technology continues to improve, giving hope that algae could one day be used more than fossil fuels [6].
A common counterargument to proposals for algal biofuel usage is that burning dry algae provides only half the caloric value of a similarly sized piece of coal. While this is true, it should be taken into consideration that coal has an extraordinarily high caloric value, and the caloric value of algae is still high relative to alternative options [3].
It is often suggested that bioethanol, which essentially uses crops as fuel, should be used instead of algal biofuel. The main problem with this proposal is that farmers would spend more time cultivating inedible crops because they make for better fuel, which would lead to food shortages on top of the world’s existing hunger problem. Farming crops also takes up arable land, while growing algae does not [7].
Drawbacks
The main problems associated with using algae as a biofuel are technological and economic. We simply do not have the technology in place right now to produce enough algae to completely replace fossil fuels. Doing so would require full-scale production plants, which is not as economically viable as simply continuing to use the fossil fuels that degrade our planet [12]. Securing funding for the commercialization of algae is the biggest obstacle this plan faces; it is difficult to get money allocated to environmental conservation efforts because, unfortunately, they do not rank highly among government priorities. Algal carbon sequestration has also never been demonstrated at a commercial scale, so there is hesitation to fully commit resources to something that seems like a gamble.
Alternative Uses
It has also been proposed that algal biomass grown to sequester carbon dioxide be used in the agricultural industry. As previously mentioned, the creators of ATS have suggested using it as a fertilizer [11]. Others suggest it could be used to feed livestock or humans, as some cultures already consume algae [12]. The seemingly limitless supply of microbes can also be harvested for use in the medical industry in the form of antimicrobial, antiviral, anti-inflammatory, anti-cancer, and antioxidant treatments [7].
Conclusion
Algae can be used to fight climate change because it removes carbon dioxide from our atmosphere, stores it as biomass, and replaces it with oxygen. Arguments have been made in many directions over the best method of algal cultivation. Triggering open-ocean algal blooms is certainly the most cost-efficient of these methods, and it produces the most algal biomass; the problem is that these blooms have devastating ecological effects on the biological communities they contact. Photobioreactors are another popular method because of their ability to efficiently produce large quantities of algae; however, the main barrier to their usage is the extremely high cost of construction and operation. With more focus on developing lower-cost photobioreactors, they could become the primary source of algal growth. Algal turf scrubbing is another cultivation strategy, though it struggles to acquire adequate funding. BICCAPS is a relatively inexpensive and eco-friendly way to grow algae in a closed system, but it yields low quantities of algal biomass compared to the other systems.
The raw algal biomass from these growth methods can potentially be used as a biofuel. Dry algae has a high caloric value, which makes it well suited for burning to power equipment. It does not burn as well as fossil fuels, but it does release more oxygen and less carbon dioxide than fossil fuels when burned. Of course, funding will be needed to increase algae production and make this a possibility, but with more research and advances in the field, algal growth could both remove large amounts of the carbon dioxide stuck in Earth’s atmosphere and become our primary fuel source down the line.
References
- Anguselvi V, Masto R, Mukherjee A, Singh P. CO2 Capture for Industries by Algae. IntechOpen. 2019 May 29.
- Toochi EC. Carbon sequestration: how much can forestry sequester CO2? MedCrave. 2018;2(3):148–150.
- Haoyang C. Algae-Based Carbon Sequestration. IOP Conf. Series: Earth and Environmental Science. 2018 Nov 1. doi:10.1088/1755-1315/120/1/012011
- Climate Science Special Report.
- Nath A, Tiwari P, Rai A, Sundaram S. Evaluation of carbon capture in competent microalgal consortium for enhanced biomass, lipid, and carbohydrate production. 3 Biotech. 2019 Oct 3.
- Ghosh A, Kiran B. Carbon Concentration in Algae: Reducing CO2 From Exhaust Gas. Trends in Biotechnology. 2017 May 3:806–808.
- Kumar A, Kaushal S, Saraf S, Singh J. Microbial bio-fuels: a solution to carbon emissions and energy crisis. Frontiers in Bioscience. 2018 Jun 1:1789–1802.
- Moreira D, Pires JCM. Atmospheric CO2 capture by algae: Negative carbon dioxide emission path. Bioresource Technology. 2016 Oct 10:371–379.
- Wells ML, Trainer VL, Smayda TJ, Karlson BSO, Trick CG. Harmful algal blooms and climate change: Learning from the past and present to forecast the future. Harmful Algae. 2015;49:68–93.
- Grattan L, Holobaugh S, Morris J. Harmful algal blooms and public health. Harmful Algae. 2016;57:2–8.
- Calahan D, Osenbaugh E, Adey W. Expanded algal cultivation can reverse key planetary boundary transgressions. Heliyon. 2018;4(2).
- Adeniyi O, Azimov U, Burluka A. Algae biofuel: Current status and future applications. Renewable and Sustainable Energy Reviews. 2018;90:316–335.
- Zhu C, Zhang R, Chen L, Chi Z. A recycling culture of Neochloris oleoabundans in a bicarbonate-based integrated carbon capture and algae production system with harvesting by auto-flocculation. Biotechnology for Biofuels. 2018 Jul 24.
- Richardson JW, Johnson MD, Zhang X, Zemke P, Chen W, Hu Q. A financial assessment of two alternative cultivation systems and their contributions to algae biofuel economic viability. Algal Research. 2014;4:96–104.
Environmental Effects of Habitable Worlds on Protein Stability
By Ana Menchaca, Biochemistry and Molecular Biology ‘20
Author’s Note: As a biochemistry major hoping to further pursue an academic career in astrobiological research, this paper jumped out at me when finding a topic for a class assignment. It goes to show just how many paths there are to take in investigating life elsewhere in the universe and how much we still have yet to discover and understand.
The search for life elsewhere is a vast, challenging undertaking, and investigating conditions on worlds deemed habitable provides insight into our current understanding of how life can exist. For a world to be considered potentially habitable, it must offer conditions similar to those that support life on Earth: a source of energy, the common essential elements that make up life (carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur), and a solvent for chemical interactions (e.g., H2O). Understanding how chemicals and molecular components might interact with these environments can give us a better sense of which worlds could actually hold the potential for life as we currently know it. One of these important molecular components is proteins.
Research exploring protein stability on Saturn’s largest moon, Titan, published in November 2019, presents an early foray into these considerations. The work studies the implications of variable environmental conditions for protein interactions and how such conditions bear on identifying potentially habitable, Earth-like worlds. In particular, it asks whether the conditions deemed necessary on Earth are truly necessary for proteins to survive. Molecular dynamics simulations, run with the software package GROMACS, explored structural interactions based on our current knowledge of protein behavior [1].
Titan is one of several potentially habitable candidates in our solar system, a category that also includes the moons Europa, Enceladus, Ganymede, and Callisto. All of these moons are thought to have subsurface oceans, conditions with the potential to provide chemical building blocks, liquid H2O, and sources of energy. Models show that Titan’s subsurface ocean may contain hydrogen, carbon, nitrogen, and ammonia (the ammonia helping to keep the ocean liquid), indicating the potential for Earth-like biochemistry [1].
Martin et al. explored the potential effects of Titan’s hypothesized environment on the integrity of biologically relevant molecules, measuring protein compactness, flexibility, and backbone dihedral angle distributions. Because protein folding is governed by chemical and electrostatic interactions between a protein and its surroundings, the difference between the conditions of Titan’s high-pressure subsurface ocean and those on Earth has the potential to alter the folding and behavior of relevant proteins [1]. Even on Earth, extremophiles in hydrothermal vents carry variant versions of common proteins, which hints at the potential for novel protein conformations that perform similar functions in conditions more extreme than those typical on Earth. The potential existence of such conformers, which still provide the same structures and functions as Earth proteins, broadens and changes our scope of what is necessary for, and indicative of, life.
In comparing Titan-like conditions to those on Earth, Martin et al. observed variations in the behavior of the selected proteins (chosen to highlight the common folds of alpha helices and beta sheets). Proteins have several levels of structure: primary, secondary, and tertiary. The primary structure is the amino acid sequence, secondary structure is created by interactions of the polypeptide with itself, and tertiary structure is the protein’s overall three-dimensional fold. Alpha helices and beta sheets are the most prevalent secondary structures in proteins on Earth, and the complex interactions and stability of these structures drive many biochemical processes.
The root-mean-square fluctuation (RMSF), a measure of how much each atom fluctuates about its average position, was lower on average for the proteins under Titan-like conditions than under Earth-like conditions, indicating less structural variability. Additionally, one of the proteins, rather than failing to stabilize into a specific secondary structure as it does on Earth, settled into a pi helix conformation, a secondary structure that is uncommon on Earth because it is less stable [1]. Due to this lower stability relative to alpha helices, pi helices are typically found near functional sites. These differing secondary structures affect a protein’s ability to interact with other molecules and enzymes in complex ways, something that in the case of pi helices is less well explored given their relative rarity on Earth.
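For readers unfamiliar with the metric, RMSF is computed per atom from the frames of a simulation trajectory. Martin et al. obtained their values from GROMACS output; the short numpy sketch below is only a minimal illustration of the definition on a synthetic trajectory, not code from the study.

```python
import numpy as np

def rmsf(positions: np.ndarray) -> np.ndarray:
    """Root-mean-square fluctuation for each atom.

    positions: array of shape (n_frames, n_atoms, 3) holding atomic
    coordinates over a trajectory. Returns an array of length n_atoms:
    the square root of the time-averaged squared deviation of each atom
    from its mean position.
    """
    mean_pos = positions.mean(axis=0)            # average structure
    displacements = positions - mean_pos         # deviation in each frame
    sq_dist = (displacements ** 2).sum(axis=-1)  # squared distance per frame
    return np.sqrt(sq_dist.mean(axis=0))         # average over frames

# Toy trajectory: 100 frames of 5 atoms jittering around fixed positions
rng = np.random.default_rng(seed=0)
trajectory = rng.normal(scale=0.1, size=(100, 5, 3))
print(rmsf(trajectory))  # smaller values indicate a more rigid structure
```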
These results show that while beta sheets behave and appear much the same in Titan-like conditions as they do on Earth, there is also a tendency toward less common conformations (pi helices). The findings highlight both the variation of protein conformation and shape under differing conditions and the survivability of proteins in non-Earth environments. This raises the possibility of discovering life in forms we are unfamiliar with, while also supporting the idea that proteins, a vital component of life, are capable of existing in extraterrestrial environments. The research strengthens the case that worlds deemed habitable really could be, and further study of the specific conformations and interactions of these proteins could tell us more about what we might identify elsewhere. While this work is an early exploration of potential conditions on Titan and other bodies with subsurface oceans, it opens the door for further studies of environmental effects on known life, expanding our understanding of the potential for life to exist elsewhere.
Sources
- Martin, Kyle P., Shannon M. Mackenzie, Jason W. Barnes, and F. Marty Ytreberg. “Protein Stability in Titan’s Subsurface Water Ocean.” Astrobiology 20, no. 2 (January 2020): 190–98. https://doi.org/10.1089/ast.2018.1972.
- Abrevaya, Ximena C., Rika Anderson, Giada Arney, Dimitra Atri, Armando Azúa-Bustos, Jeff S. Bowman, William J. Brazelton, et al. “The Astrobiology Primer v2.0.” Astrobiology 16, no. 8 (January 2016): 561–653. https://doi.org/10.1089/ast.2015.1460.
Where the Bison Roam and the Dung Beetles Roll: How American Bison, Dung Beetles, and Prescribed Fires are Bringing Grasslands Back
By John Liu, Wildlife, Fish, and Conservation Biology ‘21
Author’s Note: In this article, I will explore the overwhelming impact that the teeny tiny dung beetles have on American grasslands. Dung beetles, along with reintroduced bison and prescribed fires, are stomping, rolling, and burning through the landscape, all in an effort to revive destroyed grassland habitats. Barber et al. look at how the beetles are reacting to the bison herds and prescribed fires. Seemingly unrelated factors interact with each other closely, producing results that bring hope to one of the most threatened habitats.
Watching grass grow: it’s lit
Grasslands look quiet from afar, often characterized by windblown tallgrasses and peeking prairie dogs. But in fact, they are dynamic. Historically, grasslands were constantly changing: fires ripped through the landscape, bison stampedes kicked up dust, and grasses changed colors by the season [2]. However, climate change, increasing human populations, and agricultural conversion all contribute to an increasing loss of critical habitats, with grasslands among the most affected [7]. A loss of grasslands not only wipes out the fauna that once resided there but also reduces the ecosystem services the habitat once provided. Thus, restoring grassland habitats is of increasing concern. With the help of bison, dung beetles, and prescribed fires, recovery of grasslands is promising and likely swift.
Eat, poop, burn, repeat
As previously mentioned, grasslands thrive when continuously disturbed. The constant disturbance keeps woody vegetation from encroaching, nonnative plants from invading, and biodiversity from declining as a result of competitive exclusion between species [12]. To accomplish this, grasslands rely on large herbivore grazers such as American bison (Bison bison) to rip through the vegetation and fires to clear large areas of dry debris [9].
American bison are grazing herbivores, animals that feed on plant matter near the ground. The presence of these grazers alters available plant biomass, vegetation community structure, and soil conditions through constant trampling, consuming, and digesting of plant matter [9, 11]. Because of their valuable impact on the landscape, bison are considered a keystone species, one with an overwhelming, essential role in the success of an ecosystem [8]. Grasslands would look vastly different without bison walking on, eating, and defecating on them [9].
But bison do not aimlessly roam the grasslands, eating anything they come across. They specifically target areas that have recently burned, because these scorched patches produce new growth that is higher in nutritional content [3, 5]. Historically, lightning strikes or intense summer heat sparked these fires and drove the movement of grazers, but human intervention now suppresses these natural occurrences. Instead, prescribed fires, planned and controlled burns performed by humans, mitigate the loss of natural fires and encourage the bison’s selective foraging behavior [4, 12]. Prompting bison to follow burned patches benefits the grasslands in more ways than one. First, it prevents overgrazing of any one particular area: as the herds move across the landscape, some areas reestablish while others are being grazed. Second, the simple act of traversing large distances physically changes the landscape; bison are large animals that travel in herds, and as they move about the grasslands they trample vegetation and compact the soil beneath their hooves. Finally, grazing bison interrupt competitive exclusion, in which one species limits another’s success through competition for resources, among native plants. Because they consume vegetation indiscriminately, they leave little room for any one plant species to outcompete another [9].
The world is your toilet… with dung beetles!
What goes in must come out, and bison are no exception to that rule. After digesting the grasses they eat, bison leave behind a trail of dung and urine. The nitrogen-rich waste feeds back into the ecosystem, offering valuable nutrients to plants and soil-dwelling organisms alike [1]. But a recent study by Barber et al. highlights a small but critical component that ensures nutrient distribution is maximized in grasslands: the dung beetles (Scarabaeidae: Scarabaeinae and Aphodiinae, and Geotrupidae).
Dung beetles rely on the solid waste of their mammalian partners. The beetles eat, distribute, and even bury the dung, which helps with carbon sequestration [10]. They are found around the world, from the rainforests of Borneo to the grasslands of North America, and they interact with each environment differently. In the Neotropics, for example, dung beetles distribute seeds found in the waste of fruit-loving howler monkeys (Alouatta spp.) [6], while in North America they spread nutrients found in the waste of grazing bison. They provide a unique ecosystem function, breaking up and scattering nutrient-rich dung across vast landscapes, and these attributes have made them an increasingly popular study taxon in recent years.
Figure 1: Grassland health is largely dependent on the interplay of multiple living and non-living elements. In 1.1, the area is dominated by woody vegetation and few grasses due to a lack of disturbance. In 1.2, the introduction of prescribed fires clears some woody vegetation, allowing grasses to compete. In 1.3, bison introduce nutrients into the landscape, increasing productivity; however, the distribution of dung is limited. In 1.4, the addition of dung beetles leads to better distribution of nutrients and thus more productivity and species diversity.
Barber et al. took a closer look at how exactly dung beetles react to bison grazing and to prescribed fires blazing through their grassy fields. They found significant contributions from each, with both noticeably directing the movement and influencing the abundance of the beetles. As the bison followed the flames, so did the beetles. The beetles’ dependence on bison dung showed when researchers compared beetle abundance in two kinds of areas: those with bison and those without. There were significantly more beetles in areas with bison, likely feeding on their dung, scattering it, and burying it, all while simultaneously feeding the landscape. Prescribed fires also led to increases in beetle abundance: whether a site was 1.5 years or 30 years post-restoration, researchers consistently saw more beetles where prescribed fires were performed. This further underscores the importance of disturbance in grassland habitats, not only for ecosystem health but also for species richness.
And the grass keeps growing
The reintroduction of bison to the grasslands of America has proved successful in rebuilding a lost habitat, with the help of dung beetles and prescribed fires. However, bison and dung beetles are just one of many examples of unlikely pairings rebuilding lost habitats. Although large-scale ecological processes have been widely studied, species-to-species interactions are often overlooked. Continued surveys of the grasslands will reveal more about how these contributing factors interact with each other and with the habitat around them.
Citations
- Barber, Nicholas A., et al. “Initial Responses of Dung Beetle Communities to Bison Reintroduction in Restored and Remnant Tallgrass Prairie.” Natural Areas Journal, vol. 39, no. 4, 2019, p. 420., doi:10.3375/043.039.0405.
- Collins, Scott L., and Linda L. Wallace. Fire in North American Tallgrass Prairies. University of Oklahoma Press, 1990.
- Coppedge, B.R., and J.H. Shaw. 1998. Bison grazing patterns on seasonally burned tallgrass prairie. Journal of Range Management 51:258-264.
- Fuhlendorf, S.D., and D.M. Engle. 2004. Application of the fire–grazing interaction to restore a shifting mosaic on tallgrass prairie. Journal of Applied Ecology 41:604-614.
- Fuhlendorf, S.D., D.M. Engle, J.A.Y. Kerby, and R. Hamilton. 2009. Pyric herbivory: Rewilding landscapes through the recoupling of fire and grazing. Conservation Biology 23:588-598.
- Genes, L., Fernandez, F. A., Vaz‐de‐Mello, F. Z., da Rosa, P., Fernandez, E., and Pires, A. S. (2018), Effects of howler monkey reintroduction on ecological interactions and processes. Conservation Biology. doi:10.1111/cobi.13188
- Gibson, D.J. 2009. Grasses and Grassland Ecology. Oxford University Press, Oxford, UK.
- Khanina, Larisa. “Determining Keystone Species.” Ecology and Society, The Resilience Alliance, 15 Dec. 1998, www.ecologyandsociety.org/vol2/iss2/resp2/.
- Knapp, Alan K., et al. “The Keystone Role of Bison in North American Tallgrass Prairie: Bison Increase Habitat Heterogeneity and Alter a Broad Array of Plant, Community, and Ecosystem Processes.” BioScience, vol. 49, no. 1, 1999,
- Menendez, R., P. Webb, and K.H. Orwin. 2016. Complementarity of ´ dung beetle species with different functional behaviours influence dung–soil carbon cycling. Soil Biology and Biochemistry 92:142-148
- Mcmillan, Brock R., et al. “Vegetation Responses to an Animal-Generated Disturbance (Bison Wallows) in Tallgrass Prairie.” The American Midland Naturalist, vol. 165, no. 1, 2011, pp. 60–73., doi:10.1674/0003-0031-165.1.60.
- Packard, S., and C.F. Mutel. 2005. The Tallgrass Restoration Handbook: For Prairies, Savannas, and Woodlands. Island Press, Washington, DC.
- Raine, Elizabeth H., and Eleanor M. Slade. “Dung Beetle–Mammal Associations: Methods, Research Trends and Future Directions.” Proceedings of the Royal Society B: Biological Sciences, vol. 286, no. 1897, 2019, p. 20182002., doi:10.1098/rspb.2018.2002.
Pharmacogenomics in Personalized Medicine: How Medicine Can Be Tailored To Your Genes
By: Anushka Gupta, Genetics and Genomics, ‘20
Author’s Note: Modern medicine relies on technologies that have barely changed over the past 50 years, despite all of the research that has been conducted on new drugs and therapies. Although medications save millions of lives every year, any one of these might not work for one person even if it works for someone else. With this paper, I hope to shed light on this new rising field and the lasting effects it can have on the human population.
Future of Modern Medicine
Take the following scenario: you’re experiencing a persistent cough, a loss of appetite, and unexplained weight loss, only to then find an egg-like swelling under your arm. Today, a doctor would determine your diagnosis by taking a biopsy of your arm and analyzing the cells using the microscope, a 400-year-old technology. You have non-Hodgkin’s lymphoma. Today’s treatment plan for this condition is a generic, one-size-fits-all chemotherapy with some combination of alkylating agents, anti-metabolites, and corticosteroids (to name a few) injected intravenously to target fast-dividing cells, an approach that harms both cancer cells and healthy cells [1]. This approach may be effective, but if it doesn’t work, your doctor tells you not to despair; there are other possible drug combinations that might be able to save you.
Flash forward to the future. Your doctor will now instead scan your arm with a DNA array, a computer chip-like device that can register the activity patterns of thousands of different genes in your cells. It will then tell you that your case of lymphoma is actually one of six distinguishable types of T-cell cancer, each of which is known to respond best to different drugs. Your doctor will then use a SNP chip to flag medicines that won’t work in your case since your liver enzymes break them down too fast.
Tailoring Treatment to the Individual
The latter case is the one we would all hope for in this scenario. Luckily, it may become reality one day with the implementation of pharmacogenomics in personalized medicine. This new field grapples with the fact that new medications typically require extensive trials and testing to ensure their safety, and it holds potential as a new way to bypass parts of the traditional testing process for pharmaceuticals.
In a traditional trial, even though only the average response is reported, a drug that shows adverse side effects in any fraction of the population is immediately rejected. “Many drugs fail in clinical trials because they turn out to be toxic to just 1% or 2% of the population,” says Mark Levin, CEO of Millennium Pharmaceuticals [2]. With genotyping, drug companies will be able to identify the specific gene variants underlying severe side effects, allowing occasionally toxic drugs to be accepted, since gene tests will determine who should and should not receive them. Such pharmacogenomic advances could more than double the rate at which FDA-approved drugs reach the clinic. In the past, fast-tracking was reserved for medications intended to treat otherwise untreatable illnesses; pharmacogenomics, however, could allow medications to undergo an expedited process regardless of the severity of the disease. There would be fewer hurdles to clear because the entire population would not need to produce a desirable outcome. As long as the cause of an adverse reaction can be attributed to a specific genetic variant, the drug can still be approved by the FDA [3].
Certain treatments using this model already exist, such as one for patients with a particular genetic variant of cystic fibrosis. This approach will also help reduce the number of yearly cases of adverse drug reactions. As with any emerging field, pharmacogenomics is not without its challenges, but new research continues to test its viability.
With pharmacogenomics-informed personalized medicine, individualized treatment can be designed according to a patient’s genomic profile to predict the clinical outcomes of different treatments in different patients [4]. Normally, drugs are tested on a large population and the average response is reported. While that method of medicine relies on the law of averages, personalized medicine recognizes that no two patients are alike [5].
Genetic Variants
If the approval rate doubles, a larger variety of drugs will be available to patients with unique circumstances in which generic treatment fails. In pharmacogenomics, genomic information is used to study individual responses to drugs. Experiments can be designed to determine the correlation between particular gene variants and specific drug responses. Modern approaches, including multigene analysis and whole-genome single nucleotide polymorphism (SNP) profiles, will assist in clinical trials for drug discovery and development [5]. SNPs are especially useful because they differ between individuals and contribute to many variable characteristics, such as appearance and personality. A strong grasp of SNPs is fundamental to understanding why an individual may have a specific reaction to a drug, and these genetic markers can be mapped to particular drug responses.
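As a conceptual illustration of this kind of mapping, the sketch below uses a small hard-coded lookup table from gene/genotype pairs to predicted metabolizer phenotypes. The table is loosely modeled on well-studied pharmacogenes such as CYP2D6 and CYP2C19, but it is a simplified, hypothetical example, not clinical guidance or a method from the sources cited here.

```python
# Hypothetical sketch: looking up a patient's predicted drug-metabolizing
# phenotype from SNP-derived genotypes. The gene and star-allele names are
# real pharmacogenes, but the table is a simplified illustration only.

SNP_PHENOTYPE_TABLE = {
    ("CYP2D6", "*1/*1"): "normal metabolizer",
    ("CYP2D6", "*4/*4"): "poor metabolizer",
    ("CYP2C19", "*2/*2"): "poor metabolizer",
    ("CYP2C19", "*1/*17"): "rapid metabolizer",
}

def predict_phenotype(gene: str, genotype: str) -> str:
    """Return the predicted phenotype, or 'unknown' if the pair is untabulated."""
    return SNP_PHENOTYPE_TABLE.get((gene, genotype), "unknown")

# Example: a patient genotyped at two pharmacogenes
patient_genotypes = {"CYP2D6": "*4/*4", "CYP2C19": "*1/*17"}
for gene, genotype in patient_genotypes.items():
    print(f"{gene} {genotype}: {predict_phenotype(gene, genotype)}")
```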
Research into specific genetic variants and their association with variable drug responses will be fundamental to prescribing the right drug to a patient. The design and implementation of personalized medical therapy would not only improve treatment outcomes but also reduce the risk of toxicity and other adverse effects. A better understanding of individual variation and its effect on drug response, metabolism, excretion, and toxicity has the potential to replace the trial-and-error approach to treatment. For now, however, evidence of the clinical utility of pharmacogenetic testing exists for only a few medications, and FDA labels require pharmacogenetic testing for only a small number of drugs [6].
Cystic Fibrosis: Case Study
While this concept may seem far-fetched, a few treatments have already been approved by the FDA for specific populations as the field promotes the development of targeted therapies. For example, the drug ivacaftor was approved for patients with cystic fibrosis (CF), a genetic disease that causes persistent lung infections and limits the ability to breathe. Those diagnosed with CF have a mutation in the cystic fibrosis transmembrane conductance regulator (CFTR) gene, rendering the resulting CFTR protein defective. This protein is responsible for moving chloride to the cell surface, where it attracts the water that keeps mucus thin. In patients with the mutation, the mucus instead becomes thick and sticky, trapping bacteria that would normally be cleared and leaving the patient susceptible to infection [7]. Ivacaftor is approved only for CF patients who carry the G551D variant, one specific mutation in the CFTR gene; the drug targets the defective CFTR protein, increases its activity, and consequently improves lung function [8]. It is important to note that G551D is just one of the roughly 1,700 mutations currently known to cause CF.
Adverse Drug Reactions
Pharmacogenomics also addresses the often unpredictable adverse effects of drugs, especially medications taken too frequently or for too long. These adverse drug reactions (ADRs) are estimated to cost $136 billion annually. Within the United States alone, serious side effects from pharmaceutical drugs occur in 2 million people each year and may cause as many as 100,000 deaths, making them the fourth most common cause of death according to the FDA [9].
The seemingly mysterious and unpredictable side effects of various drugs have largely been attributed to individual variation encoded in the genome rather than to drug dosage. Genetics also underlies hypersensitivity reactions in patients who are allergic to certain drugs; in these cases, the body mounts a rapid and aggressive immune response that can hinder breathing and may even lead to cardiovascular collapse [5]. This is just one of countless ways that unrecognized hypersensitivity to a drug can lead to extreme outcomes. However, recent research in pharmacogenomics suggests that as much as 80% of this variability in drug response could be reduced. If so, a significant share of these ADRs could be prevented through genetically informed patient management, leading to better outcomes [11].
Challenges
Pharmacogenomics-informed medicine may suggest the eventual demise of the traditional model of drug development, but the concept of targeted therapy is still in its early stages. One reason is that most pharmacogenetic traits involve more than one gene, making it difficult to understand, let alone predict, the variation in a complex phenotype like drug response. Genome-wide approaches have also produced evidence that drugs can have multiple targets and numerous off-target effects [4].
Even though this is a promising field, other challenges must be overcome. There is a large gap between the primary care workforce and the genomic information now available for various diseases and conditions, because many healthcare workers are not prepared to integrate genomics into their daily practice. Medical school curricula would need to be updated to cover pharmacogenomics-informed personalized medicine. The complexity of the field and its inherently interdisciplinary nature also create a barrier to presenting this research to broader audiences, including medical personnel [12].
Conclusion
The field has made important strides over the past decade, but clinical trials are still needed not only to identify links between genes and treatment outcomes, but also to clarify the meaning of these associations and translate them into prescribing guidelines [4]. Despite its potential, there are still few examples in which pharmacogenomics has demonstrated clinical utility, especially since many genetic variants have not yet been studied. Nonetheless, progress in the field offers a glimpse of a time when pharmacogenomics and personalized medicine will be a part of regular patient care.
Sources
1. “Chemotherapy for Non-Hodgkin Lymphoma.” American Cancer Society, www.cancer.org/cancer/non-hodgkin-lymphoma/treating/chemotherapy.html.
2. Greek, Jean Swingle, and C. Ray Greek. What Will We Do If We Don’t Experiment on Animals?: Medical Research for the Twenty-First Century. Trafford, 2004. Google Books, books.google.com/books?id=mB3t1MTpZLUC&pg=PA153&lpg=PA153&dq=mark+levin+drugs+fail+in+clinical+trials&source=bl&ots=ugdZPtcAFU&sig=ACfU3U12d-BQF1v67T3WCK8-J4SZS9aMPg&hl=en&sa=X&ved=2ahUKEwjVn6KfypboAhUDM6wKHWw1BrQQ6AEwBXoECAkQAQ#v=onepage&q=mark%20levin%20drugs%20fail%20in%20clinical%20trials&f=false.
3. Chary, Krishnan Vengadaraga. “Expedited Drug Review Process: Fast, but Flawed.” Journal of Pharmacology & Pharmacotherapeutics, 2016, www.ncbi.nlm.nih.gov/pmc/articles/PMC4936080/.
4. Schwab, M., and E. Schaeffeler. “Pharmacogenomics: A Key Component of Personalized Therapy.” Genome Medicine, vol. 4, 93, 2012, https://doi.org/10.1186/gm394.
5. Adams, J. “Pharmacogenomics and Personalized Medicine.” Nature Education, vol. 1, no. 1, 2008, p. 194.
6. Singh, D. B. “The Impact of Pharmacogenomics in Personalized Medicine.” Current Applications of Pharmaceutical Biotechnology, edited by A. Silva, J. Moreira, J. Lobo, and H. Almeida, Advances in Biochemical Engineering/Biotechnology, vol. 171, Springer, Cham, 2019.
7. “About Cystic Fibrosis.” Cystic Fibrosis Foundation, www.cff.org/What-is-CF/About-Cystic-Fibrosis/.
8. Eckford, P. D., C. Li, M. Ramjeesingh, and C. E. Bear. “CFTR Potentiator VX-770 (Ivacaftor) Opens the Defective Channel Gate of Mutant CFTR in a Phosphorylation-Dependent but ATP-Independent Manner.” Journal of Biological Chemistry, vol. 287, 2012, pp. 36639–36649, doi:10.1074/jbc.M112.393637.
9. Pirmohamed, Munir, and B. Kevin Park. “Genetic Susceptibility to Adverse Drug Reactions.” Trends in Pharmacological Sciences, vol. 22, no. 6, 2001, pp. 298–305, doi:10.1016/s0165-6147(00)01717-x.
10. Adams, J. “Pharmacogenomics and Personalized Medicine.” Nature Education, vol. 1, no. 1, 2008, p. 194.
11. Cacabelos, Ramón, et al. “The Role of Pharmacogenomics in Adverse Drug Reactions.” Expert Review of Clinical Pharmacology, May 2019, www.ncbi.nlm.nih.gov/pubmed/30916581.
12. Roden, Dan M., et al. “Pharmacogenomics: Challenges and Opportunities.” Annals of Internal Medicine, 21 Nov. 2006, www.ncbi.nlm.nih.gov/pmc/articles/PMC5006954/#idm140518217413328title.